00:00:00.000 Started by upstream project "autotest-per-patch" build number 126247 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.020 The recommended git tool is: git 00:00:00.020 using credential 00000000-0000-0000-0000-000000000002 00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.038 Fetching changes from the remote Git repository 00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.085 Using shallow fetch with depth 1 00:00:00.085 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.085 > git --version # timeout=10 00:00:00.105 > git --version # 'git version 2.39.2' 00:00:00.105 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.141 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.141 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.031 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.043 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.057 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.057 > git config core.sparsecheckout # timeout=10 00:00:03.068 > git read-tree -mu HEAD # timeout=10 00:00:03.085 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.106 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.106 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.315 [Pipeline] Start of Pipeline 00:00:03.335 [Pipeline] library 00:00:03.337 Loading library shm_lib@master 00:00:03.338 Library shm_lib@master is cached. Copying from home. 00:00:03.356 [Pipeline] node 00:00:03.363 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.369 [Pipeline] { 00:00:03.384 [Pipeline] catchError 00:00:03.386 [Pipeline] { 00:00:03.404 [Pipeline] wrap 00:00:03.417 [Pipeline] { 00:00:03.427 [Pipeline] stage 00:00:03.430 [Pipeline] { (Prologue) 00:00:03.451 [Pipeline] echo 00:00:03.453 Node: VM-host-WFP1 00:00:03.458 [Pipeline] cleanWs 00:00:03.466 [WS-CLEANUP] Deleting project workspace... 00:00:03.466 [WS-CLEANUP] Deferred wipeout is used... 
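The prologue above amounts to a shallow, pinned checkout of the jbp (jenkins build pool) repository before the pipeline proper starts. A minimal sketch of the equivalent manual sequence, using the repository URL and revision shown in the log (the credential helper, proxy, and timeouts that Jenkins injects are omitted here):

    # Sketch: reproduce the jbp checkout by hand (URL and SHA taken from the log above).
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f 7caca6989ac753a10259529aadac5754060382af   # detached HEAD at the pinned revision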
00:00:03.472 [WS-CLEANUP] done 00:00:03.634 [Pipeline] setCustomBuildProperty 00:00:03.704 [Pipeline] httpRequest 00:00:03.721 [Pipeline] echo 00:00:03.722 Sorcerer 10.211.164.101 is alive 00:00:03.727 [Pipeline] httpRequest 00:00:03.730 HttpMethod: GET 00:00:03.730 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.731 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.732 Response Code: HTTP/1.1 200 OK 00:00:03.732 Success: Status code 200 is in the accepted range: 200,404 00:00:03.732 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.877 [Pipeline] sh 00:00:04.156 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.168 [Pipeline] httpRequest 00:00:04.179 [Pipeline] echo 00:00:04.180 Sorcerer 10.211.164.101 is alive 00:00:04.187 [Pipeline] httpRequest 00:00:04.189 HttpMethod: GET 00:00:04.190 URL: http://10.211.164.101/packages/spdk_20d0fd684c54acc931f7cf1fb68a7e967cfec7bd.tar.gz 00:00:04.191 Sending request to url: http://10.211.164.101/packages/spdk_20d0fd684c54acc931f7cf1fb68a7e967cfec7bd.tar.gz 00:00:04.192 Response Code: HTTP/1.1 200 OK 00:00:04.192 Success: Status code 200 is in the accepted range: 200,404 00:00:04.193 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_20d0fd684c54acc931f7cf1fb68a7e967cfec7bd.tar.gz 00:00:18.175 [Pipeline] sh 00:00:18.454 + tar --no-same-owner -xf spdk_20d0fd684c54acc931f7cf1fb68a7e967cfec7bd.tar.gz 00:00:20.996 [Pipeline] sh 00:00:21.276 + git -C spdk log --oneline -n5 00:00:21.276 20d0fd684 sock: add spdk_sock_get_interface_name 00:00:21.276 06cc9fb0c build: fix unit test builds that directly use env_dpdk 00:00:21.276 406b3b1b5 util: allow NULL saddr/caddr for spdk_net_getaddr 00:00:21.276 1053f1b13 util: don't allow users to pass caddr/cport for listen sockets 00:00:21.277 0663932f5 util: add spdk_net_getaddr 00:00:21.298 [Pipeline] writeFile 00:00:21.317 [Pipeline] sh 00:00:21.597 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:21.608 [Pipeline] sh 00:00:21.921 + cat autorun-spdk.conf 00:00:21.921 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:21.921 SPDK_TEST_NVMF=1 00:00:21.921 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:21.921 SPDK_TEST_URING=1 00:00:21.921 SPDK_TEST_USDT=1 00:00:21.921 SPDK_RUN_UBSAN=1 00:00:21.921 NET_TYPE=virt 00:00:21.921 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:21.927 RUN_NIGHTLY=0 00:00:21.930 [Pipeline] } 00:00:21.947 [Pipeline] // stage 00:00:21.964 [Pipeline] stage 00:00:21.966 [Pipeline] { (Run VM) 00:00:21.980 [Pipeline] sh 00:00:22.262 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:22.262 + echo 'Start stage prepare_nvme.sh' 00:00:22.262 Start stage prepare_nvme.sh 00:00:22.262 + [[ -n 4 ]] 00:00:22.262 + disk_prefix=ex4 00:00:22.262 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:22.262 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:22.262 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:22.262 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:22.262 ++ SPDK_TEST_NVMF=1 00:00:22.262 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:22.262 ++ SPDK_TEST_URING=1 00:00:22.262 ++ SPDK_TEST_USDT=1 00:00:22.262 ++ SPDK_RUN_UBSAN=1 00:00:22.262 ++ NET_TYPE=virt 00:00:22.262 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:22.262 ++ RUN_NIGHTLY=0 00:00:22.262 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:22.262 + nvme_files=() 00:00:22.262 + declare -A nvme_files 00:00:22.262 + backend_dir=/var/lib/libvirt/images/backends 00:00:22.262 + nvme_files['nvme.img']=5G 00:00:22.262 + nvme_files['nvme-cmb.img']=5G 00:00:22.262 + nvme_files['nvme-multi0.img']=4G 00:00:22.262 + nvme_files['nvme-multi1.img']=4G 00:00:22.262 + nvme_files['nvme-multi2.img']=4G 00:00:22.262 + nvme_files['nvme-openstack.img']=8G 00:00:22.262 + nvme_files['nvme-zns.img']=5G 00:00:22.262 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:22.262 + (( SPDK_TEST_FTL == 1 )) 00:00:22.262 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:22.262 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:22.262 + for nvme in "${!nvme_files[@]}" 00:00:22.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:22.262 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:22.262 + for nvme in "${!nvme_files[@]}" 00:00:22.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:22.262 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:22.262 + for nvme in "${!nvme_files[@]}" 00:00:22.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:22.262 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:22.262 + for nvme in "${!nvme_files[@]}" 00:00:22.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:22.262 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:22.262 + for nvme in "${!nvme_files[@]}" 00:00:22.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:22.262 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:22.262 + for nvme in "${!nvme_files[@]}" 00:00:22.262 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:22.521 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:22.521 + for nvme in "${!nvme_files[@]}" 00:00:22.521 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:22.521 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:22.521 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:22.521 + echo 'End stage prepare_nvme.sh' 00:00:22.521 End stage prepare_nvme.sh 00:00:22.532 [Pipeline] sh 00:00:22.814 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:22.814 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:00:22.814 00:00:22.814 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:22.814 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:22.814 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:22.814 HELP=0 00:00:22.814 DRY_RUN=0 00:00:22.814 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:22.814 NVME_DISKS_TYPE=nvme,nvme, 00:00:22.814 NVME_AUTO_CREATE=0 00:00:22.814 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:22.814 NVME_CMB=,, 00:00:22.814 NVME_PMR=,, 00:00:22.814 NVME_ZNS=,, 00:00:22.814 NVME_MS=,, 00:00:22.814 NVME_FDP=,, 00:00:22.814 SPDK_VAGRANT_DISTRO=fedora38 00:00:22.814 SPDK_VAGRANT_VMCPU=10 00:00:22.814 SPDK_VAGRANT_VMRAM=12288 00:00:22.814 SPDK_VAGRANT_PROVIDER=libvirt 00:00:22.814 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:22.814 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:22.814 SPDK_OPENSTACK_NETWORK=0 00:00:22.814 VAGRANT_PACKAGE_BOX=0 00:00:22.814 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:22.814 FORCE_DISTRO=true 00:00:22.814 VAGRANT_BOX_VERSION= 00:00:22.814 EXTRA_VAGRANTFILES= 00:00:22.814 NIC_MODEL=e1000 00:00:22.814 00:00:22.814 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:22.814 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:25.369 Bringing machine 'default' up with 'libvirt' provider... 00:00:26.747 ==> default: Creating image (snapshot of base box volume). 00:00:26.747 ==> default: Creating domain with the following settings... 00:00:26.747 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721075807_80bc73826435a8a1c1d1 00:00:26.747 ==> default: -- Domain type: kvm 00:00:26.747 ==> default: -- Cpus: 10 00:00:26.747 ==> default: -- Feature: acpi 00:00:26.747 ==> default: -- Feature: apic 00:00:26.747 ==> default: -- Feature: pae 00:00:26.747 ==> default: -- Memory: 12288M 00:00:26.747 ==> default: -- Memory Backing: hugepages: 00:00:26.747 ==> default: -- Management MAC: 00:00:26.747 ==> default: -- Loader: 00:00:26.747 ==> default: -- Nvram: 00:00:26.747 ==> default: -- Base box: spdk/fedora38 00:00:26.747 ==> default: -- Storage pool: default 00:00:26.747 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721075807_80bc73826435a8a1c1d1.img (20G) 00:00:26.747 ==> default: -- Volume Cache: default 00:00:26.747 ==> default: -- Kernel: 00:00:26.747 ==> default: -- Initrd: 00:00:26.747 ==> default: -- Graphics Type: vnc 00:00:26.747 ==> default: -- Graphics Port: -1 00:00:26.747 ==> default: -- Graphics IP: 127.0.0.1 00:00:26.747 ==> default: -- Graphics Password: Not defined 00:00:26.747 ==> default: -- Video Type: cirrus 00:00:26.747 ==> default: -- Video VRAM: 9216 00:00:26.747 ==> default: -- Sound Type: 00:00:26.747 ==> default: -- Keymap: en-us 00:00:26.747 ==> default: -- TPM Path: 00:00:26.747 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:26.747 ==> default: -- Command line args: 00:00:26.747 ==> default: -> value=-device, 00:00:26.747 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:26.747 ==> default: -> value=-drive, 00:00:26.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:26.747 ==> default: -> value=-device, 00:00:26.747 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:26.747 ==> default: -> value=-device, 00:00:26.747 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:26.747 ==> default: -> value=-drive, 00:00:26.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:26.747 ==> default: -> value=-device, 00:00:26.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:26.747 ==> default: -> value=-drive, 00:00:26.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:26.747 ==> default: -> value=-device, 00:00:26.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:26.747 ==> default: -> value=-drive, 00:00:26.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:26.747 ==> default: -> value=-device, 00:00:26.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:27.315 ==> default: Creating shared folders metadata... 00:00:27.315 ==> default: Starting domain. 00:00:29.221 ==> default: Waiting for domain to get an IP address... 00:00:47.300 ==> default: Waiting for SSH to become available... 00:00:47.300 ==> default: Configuring and enabling network interfaces... 00:00:51.483 default: SSH address: 192.168.121.162:22 00:00:51.483 default: SSH username: vagrant 00:00:51.483 default: SSH auth method: private key 00:00:54.016 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:02.187 ==> default: Mounting SSHFS shared folder... 00:01:04.720 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:04.720 ==> default: Checking Mount.. 00:01:06.097 ==> default: Folder Successfully Mounted! 00:01:06.097 ==> default: Running provisioner: file... 00:01:07.035 default: ~/.gitconfig => .gitconfig 00:01:07.603 00:01:07.603 SUCCESS! 00:01:07.603 00:01:07.603 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:07.603 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:07.603 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
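Stage prepare_nvme.sh and the vagrant bring-up above reduce to: create raw backing files, then attach them to the guest as emulated NVMe controllers with one namespace per backing file. A rough sketch of the same topology expressed as standalone commands (paths, serials, and block sizes are copied from the log; the accel, memory, and CPU options here are illustrative placeholders rather than what vagrant_create_vm.sh actually generates for libvirt):

    # Sketch: one raw backing file plus the first NVMe controller/namespace pair from the log.
    qemu-img create -f raw -o preallocation=falloc /var/lib/libvirt/images/backends/ex4-nvme.img 5G

    qemu-system-x86_64 -accel kvm -smp 10 -m 12288 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096
    # The second controller (serial=12341) carries three namespaces backed by the
    # ex4-nvme-multi{0,1,2}.img files, exactly as listed in the command-line args above.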
00:01:07.603 00:01:07.614 [Pipeline] } 00:01:07.635 [Pipeline] // stage 00:01:07.646 [Pipeline] dir 00:01:07.647 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:07.649 [Pipeline] { 00:01:07.666 [Pipeline] catchError 00:01:07.668 [Pipeline] { 00:01:07.687 [Pipeline] sh 00:01:07.971 + vagrant ssh-config --host vagrant 00:01:07.971 + sed -ne /^Host/,$p 00:01:07.971 + tee ssh_conf 00:01:11.254 Host vagrant 00:01:11.254 HostName 192.168.121.162 00:01:11.254 User vagrant 00:01:11.254 Port 22 00:01:11.254 UserKnownHostsFile /dev/null 00:01:11.254 StrictHostKeyChecking no 00:01:11.254 PasswordAuthentication no 00:01:11.254 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:11.254 IdentitiesOnly yes 00:01:11.254 LogLevel FATAL 00:01:11.254 ForwardAgent yes 00:01:11.254 ForwardX11 yes 00:01:11.254 00:01:11.268 [Pipeline] withEnv 00:01:11.270 [Pipeline] { 00:01:11.287 [Pipeline] sh 00:01:11.567 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:11.567 source /etc/os-release 00:01:11.567 [[ -e /image.version ]] && img=$(< /image.version) 00:01:11.567 # Minimal, systemd-like check. 00:01:11.567 if [[ -e /.dockerenv ]]; then 00:01:11.567 # Clear garbage from the node's name: 00:01:11.567 # agt-er_autotest_547-896 -> autotest_547-896 00:01:11.567 # $HOSTNAME is the actual container id 00:01:11.567 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:11.567 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:11.567 # We can assume this is a mount from a host where container is running, 00:01:11.567 # so fetch its hostname to easily identify the target swarm worker. 00:01:11.567 container="$(< /etc/hostname) ($agent)" 00:01:11.567 else 00:01:11.567 # Fallback 00:01:11.567 container=$agent 00:01:11.567 fi 00:01:11.567 fi 00:01:11.567 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:11.567 00:01:11.833 [Pipeline] } 00:01:11.848 [Pipeline] // withEnv 00:01:11.855 [Pipeline] setCustomBuildProperty 00:01:11.866 [Pipeline] stage 00:01:11.868 [Pipeline] { (Tests) 00:01:11.884 [Pipeline] sh 00:01:12.159 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:12.432 [Pipeline] sh 00:01:12.755 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:13.050 [Pipeline] timeout 00:01:13.051 Timeout set to expire in 30 min 00:01:13.052 [Pipeline] { 00:01:13.067 [Pipeline] sh 00:01:13.348 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:13.916 HEAD is now at 20d0fd684 sock: add spdk_sock_get_interface_name 00:01:13.928 [Pipeline] sh 00:01:14.208 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:14.478 [Pipeline] sh 00:01:14.756 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:15.030 [Pipeline] sh 00:01:15.309 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:15.567 ++ readlink -f spdk_repo 00:01:15.567 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:15.567 + [[ -n /home/vagrant/spdk_repo ]] 00:01:15.567 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:15.567 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:15.567 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:01:15.567 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:15.567 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:15.567 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:15.567 + cd /home/vagrant/spdk_repo 00:01:15.567 + source /etc/os-release 00:01:15.567 ++ NAME='Fedora Linux' 00:01:15.567 ++ VERSION='38 (Cloud Edition)' 00:01:15.567 ++ ID=fedora 00:01:15.567 ++ VERSION_ID=38 00:01:15.567 ++ VERSION_CODENAME= 00:01:15.567 ++ PLATFORM_ID=platform:f38 00:01:15.567 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:15.567 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:15.567 ++ LOGO=fedora-logo-icon 00:01:15.567 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:15.567 ++ HOME_URL=https://fedoraproject.org/ 00:01:15.567 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:15.567 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:15.567 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:15.567 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:15.567 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:15.567 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:15.567 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:15.567 ++ SUPPORT_END=2024-05-14 00:01:15.567 ++ VARIANT='Cloud Edition' 00:01:15.567 ++ VARIANT_ID=cloud 00:01:15.567 + uname -a 00:01:15.567 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:15.567 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:16.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:16.135 Hugepages 00:01:16.135 node hugesize free / total 00:01:16.135 node0 1048576kB 0 / 0 00:01:16.135 node0 2048kB 0 / 0 00:01:16.135 00:01:16.135 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.135 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:16.135 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:16.135 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:16.135 + rm -f /tmp/spdk-ld-path 00:01:16.135 + source autorun-spdk.conf 00:01:16.135 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.135 ++ SPDK_TEST_NVMF=1 00:01:16.135 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.135 ++ SPDK_TEST_URING=1 00:01:16.135 ++ SPDK_TEST_USDT=1 00:01:16.135 ++ SPDK_RUN_UBSAN=1 00:01:16.135 ++ NET_TYPE=virt 00:01:16.135 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.135 ++ RUN_NIGHTLY=0 00:01:16.135 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.135 + [[ -n '' ]] 00:01:16.135 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:16.135 + for M in /var/spdk/build-*-manifest.txt 00:01:16.135 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.135 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.135 + for M in /var/spdk/build-*-manifest.txt 00:01:16.135 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.135 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:16.135 ++ uname 00:01:16.394 + [[ Linux == \L\i\n\u\x ]] 00:01:16.395 + sudo dmesg -T 00:01:16.395 + sudo dmesg --clear 00:01:16.395 + dmesg_pid=5104 00:01:16.395 + [[ Fedora Linux == FreeBSD ]] 00:01:16.395 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.395 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.395 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.395 + sudo dmesg -Tw 00:01:16.395 + [[ -x /usr/src/fio-static/fio ]] 
00:01:16.395 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.395 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.395 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.395 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:16.395 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.395 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.395 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.395 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.395 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.395 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.395 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:16.395 Test configuration: 00:01:16.395 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.395 SPDK_TEST_NVMF=1 00:01:16.395 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.395 SPDK_TEST_URING=1 00:01:16.395 SPDK_TEST_USDT=1 00:01:16.395 SPDK_RUN_UBSAN=1 00:01:16.395 NET_TYPE=virt 00:01:16.395 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.395 RUN_NIGHTLY=0 20:37:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:16.395 20:37:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.395 20:37:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.395 20:37:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.395 20:37:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.395 20:37:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.395 20:37:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.395 20:37:38 -- paths/export.sh@5 -- $ export PATH 00:01:16.395 20:37:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.395 20:37:38 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:16.395 20:37:38 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:16.395 20:37:38 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721075858.XXXXXX 00:01:16.395 20:37:38 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721075858.kZV0aO 
00:01:16.395 20:37:38 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:16.395 20:37:38 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:16.395 20:37:38 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:16.395 20:37:38 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:16.395 20:37:38 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.395 20:37:38 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:16.395 20:37:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:16.395 20:37:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.653 20:37:38 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:16.653 20:37:38 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:16.653 20:37:38 -- pm/common@17 -- $ local monitor 00:01:16.653 20:37:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.653 20:37:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.653 20:37:38 -- pm/common@25 -- $ sleep 1 00:01:16.653 20:37:38 -- pm/common@21 -- $ date +%s 00:01:16.653 20:37:38 -- pm/common@21 -- $ date +%s 00:01:16.653 20:37:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721075858 00:01:16.653 20:37:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721075858 00:01:16.653 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721075858_collect-vmstat.pm.log 00:01:16.653 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721075858_collect-cpu-load.pm.log 00:01:17.584 20:37:39 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:17.584 20:37:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.584 20:37:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.584 20:37:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:17.584 20:37:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.584 Mon Jul 15 08:37:39 PM UTC 2024 00:01:17.584 20:37:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.584 v24.09-pre-221-g20d0fd684 00:01:17.584 20:37:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.584 20:37:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.584 20:37:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.585 20:37:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:17.585 20:37:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:17.585 20:37:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.585 ************************************ 00:01:17.585 START TEST ubsan 00:01:17.585 ************************************ 00:01:17.585 using ubsan 00:01:17.585 20:37:39 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:17.585 00:01:17.585 real 0m0.000s 00:01:17.585 user 0m0.000s 00:01:17.585 sys 
0m0.000s 00:01:17.585 20:37:39 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:17.585 ************************************ 00:01:17.585 END TEST ubsan 00:01:17.585 ************************************ 00:01:17.585 20:37:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.585 20:37:39 -- common/autotest_common.sh@1142 -- $ return 0 00:01:17.585 20:37:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.585 20:37:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.585 20:37:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.585 20:37:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.585 20:37:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.585 20:37:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.585 20:37:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.585 20:37:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.585 20:37:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:17.843 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:17.843 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:18.409 Using 'verbs' RDMA provider 00:01:34.216 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:49.124 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:49.694 Creating mk/config.mk...done. 00:01:49.694 Creating mk/cc.flags.mk...done. 00:01:49.694 Type 'make' to build. 00:01:49.694 20:38:11 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:49.694 20:38:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.694 20:38:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.694 20:38:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.694 ************************************ 00:01:49.694 START TEST make 00:01:49.694 ************************************ 00:01:49.694 20:38:11 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:49.951 make[1]: Nothing to be done for 'all'. 
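Everything from autorun.sh onward funnels into SPDK's usual configure-then-make flow; the DPDK submodule that meson/ninja start building below is driven from that same make. A minimal sketch of the same build run by hand inside the VM, with the configure flags copied from the log:

    # Sketch: the SPDK build performed above, invoked manually (flags copied from the log).
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10    # also configures and builds the bundled DPDK via meson/ninja, as shown below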
00:01:59.927 The Meson build system 00:01:59.927 Version: 1.3.1 00:01:59.927 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:59.927 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:59.927 Build type: native build 00:01:59.927 Program cat found: YES (/usr/bin/cat) 00:01:59.927 Project name: DPDK 00:01:59.927 Project version: 24.03.0 00:01:59.927 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.927 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.927 Host machine cpu family: x86_64 00:01:59.927 Host machine cpu: x86_64 00:01:59.927 Message: ## Building in Developer Mode ## 00:01:59.927 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.927 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.927 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.927 Program python3 found: YES (/usr/bin/python3) 00:01:59.927 Program cat found: YES (/usr/bin/cat) 00:01:59.927 Compiler for C supports arguments -march=native: YES 00:01:59.927 Checking for size of "void *" : 8 00:01:59.927 Checking for size of "void *" : 8 (cached) 00:01:59.927 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.927 Library m found: YES 00:01:59.927 Library numa found: YES 00:01:59.927 Has header "numaif.h" : YES 00:01:59.927 Library fdt found: NO 00:01:59.927 Library execinfo found: NO 00:01:59.927 Has header "execinfo.h" : YES 00:01:59.927 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.927 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.927 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.927 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.927 Run-time dependency openssl found: YES 3.0.9 00:01:59.927 Run-time dependency libpcap found: YES 1.10.4 00:01:59.927 Has header "pcap.h" with dependency libpcap: YES 00:01:59.927 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.927 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.927 Compiler for C supports arguments -Wformat: YES 00:01:59.927 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.927 Compiler for C supports arguments -Wformat-security: NO 00:01:59.927 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.927 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.927 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.927 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.927 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.927 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.927 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.927 Compiler for C supports arguments -Wundef: YES 00:01:59.927 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.927 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.927 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.927 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.927 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.927 Program objdump found: YES (/usr/bin/objdump) 00:01:59.927 Compiler for C supports arguments -mavx512f: YES 00:01:59.927 Checking if "AVX512 checking" compiles: YES 00:01:59.927 Fetching value of define "__SSE4_2__" : 1 00:01:59.927 Fetching value of define 
"__AES__" : 1 00:01:59.927 Fetching value of define "__AVX__" : 1 00:01:59.927 Fetching value of define "__AVX2__" : 1 00:01:59.927 Fetching value of define "__AVX512BW__" : 1 00:01:59.927 Fetching value of define "__AVX512CD__" : 1 00:01:59.927 Fetching value of define "__AVX512DQ__" : 1 00:01:59.927 Fetching value of define "__AVX512F__" : 1 00:01:59.927 Fetching value of define "__AVX512VL__" : 1 00:01:59.927 Fetching value of define "__PCLMUL__" : 1 00:01:59.927 Fetching value of define "__RDRND__" : 1 00:01:59.927 Fetching value of define "__RDSEED__" : 1 00:01:59.927 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.927 Fetching value of define "__znver1__" : (undefined) 00:01:59.927 Fetching value of define "__znver2__" : (undefined) 00:01:59.927 Fetching value of define "__znver3__" : (undefined) 00:01:59.927 Fetching value of define "__znver4__" : (undefined) 00:01:59.927 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.927 Message: lib/log: Defining dependency "log" 00:01:59.927 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.927 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.927 Checking for function "getentropy" : NO 00:01:59.927 Message: lib/eal: Defining dependency "eal" 00:01:59.927 Message: lib/ring: Defining dependency "ring" 00:01:59.927 Message: lib/rcu: Defining dependency "rcu" 00:01:59.927 Message: lib/mempool: Defining dependency "mempool" 00:01:59.927 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.927 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.927 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:59.927 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:59.927 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:59.927 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:59.927 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:59.927 Compiler for C supports arguments -mpclmul: YES 00:01:59.927 Compiler for C supports arguments -maes: YES 00:01:59.927 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.927 Compiler for C supports arguments -mavx512bw: YES 00:01:59.927 Compiler for C supports arguments -mavx512dq: YES 00:01:59.927 Compiler for C supports arguments -mavx512vl: YES 00:01:59.927 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.927 Compiler for C supports arguments -mavx2: YES 00:01:59.927 Compiler for C supports arguments -mavx: YES 00:01:59.927 Message: lib/net: Defining dependency "net" 00:01:59.927 Message: lib/meter: Defining dependency "meter" 00:01:59.927 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.927 Message: lib/pci: Defining dependency "pci" 00:01:59.927 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.927 Message: lib/hash: Defining dependency "hash" 00:01:59.927 Message: lib/timer: Defining dependency "timer" 00:01:59.927 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.927 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.927 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.927 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.927 Message: lib/power: Defining dependency "power" 00:01:59.927 Message: lib/reorder: Defining dependency "reorder" 00:01:59.927 Message: lib/security: Defining dependency "security" 00:01:59.927 Has header "linux/userfaultfd.h" : YES 00:01:59.927 Has header "linux/vduse.h" : YES 00:01:59.927 Message: lib/vhost: Defining dependency "vhost" 00:01:59.927 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:01:59.927 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.927 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.927 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.927 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.927 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.927 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.927 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.927 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.927 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.927 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.927 Configuring doxy-api-html.conf using configuration 00:01:59.927 Configuring doxy-api-man.conf using configuration 00:01:59.927 Program mandb found: YES (/usr/bin/mandb) 00:01:59.927 Program sphinx-build found: NO 00:01:59.927 Configuring rte_build_config.h using configuration 00:01:59.927 Message: 00:01:59.927 ================= 00:01:59.927 Applications Enabled 00:01:59.927 ================= 00:01:59.927 00:01:59.927 apps: 00:01:59.927 00:01:59.927 00:01:59.927 Message: 00:01:59.927 ================= 00:01:59.927 Libraries Enabled 00:01:59.927 ================= 00:01:59.927 00:01:59.927 libs: 00:01:59.927 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.927 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.927 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.927 00:01:59.927 Message: 00:01:59.927 =============== 00:01:59.927 Drivers Enabled 00:01:59.927 =============== 00:01:59.927 00:01:59.927 common: 00:01:59.927 00:01:59.927 bus: 00:01:59.927 pci, vdev, 00:01:59.927 mempool: 00:01:59.927 ring, 00:01:59.927 dma: 00:01:59.927 00:01:59.927 net: 00:01:59.927 00:01:59.927 crypto: 00:01:59.927 00:01:59.927 compress: 00:01:59.927 00:01:59.927 vdpa: 00:01:59.927 00:01:59.927 00:01:59.927 Message: 00:01:59.927 ================= 00:01:59.927 Content Skipped 00:01:59.927 ================= 00:01:59.927 00:01:59.927 apps: 00:01:59.927 dumpcap: explicitly disabled via build config 00:01:59.927 graph: explicitly disabled via build config 00:01:59.927 pdump: explicitly disabled via build config 00:01:59.927 proc-info: explicitly disabled via build config 00:01:59.927 test-acl: explicitly disabled via build config 00:01:59.927 test-bbdev: explicitly disabled via build config 00:01:59.927 test-cmdline: explicitly disabled via build config 00:01:59.927 test-compress-perf: explicitly disabled via build config 00:01:59.927 test-crypto-perf: explicitly disabled via build config 00:01:59.927 test-dma-perf: explicitly disabled via build config 00:01:59.927 test-eventdev: explicitly disabled via build config 00:01:59.927 test-fib: explicitly disabled via build config 00:01:59.927 test-flow-perf: explicitly disabled via build config 00:01:59.927 test-gpudev: explicitly disabled via build config 00:01:59.927 test-mldev: explicitly disabled via build config 00:01:59.927 test-pipeline: explicitly disabled via build config 00:01:59.927 test-pmd: explicitly disabled via build config 00:01:59.927 test-regex: explicitly disabled via build config 00:01:59.927 test-sad: explicitly disabled via build config 00:01:59.927 test-security-perf: explicitly disabled via build config 00:01:59.927 00:01:59.927 libs: 00:01:59.927 argparse: 
explicitly disabled via build config 00:01:59.927 metrics: explicitly disabled via build config 00:01:59.927 acl: explicitly disabled via build config 00:01:59.927 bbdev: explicitly disabled via build config 00:01:59.927 bitratestats: explicitly disabled via build config 00:01:59.928 bpf: explicitly disabled via build config 00:01:59.928 cfgfile: explicitly disabled via build config 00:01:59.928 distributor: explicitly disabled via build config 00:01:59.928 efd: explicitly disabled via build config 00:01:59.928 eventdev: explicitly disabled via build config 00:01:59.928 dispatcher: explicitly disabled via build config 00:01:59.928 gpudev: explicitly disabled via build config 00:01:59.928 gro: explicitly disabled via build config 00:01:59.928 gso: explicitly disabled via build config 00:01:59.928 ip_frag: explicitly disabled via build config 00:01:59.928 jobstats: explicitly disabled via build config 00:01:59.928 latencystats: explicitly disabled via build config 00:01:59.928 lpm: explicitly disabled via build config 00:01:59.928 member: explicitly disabled via build config 00:01:59.928 pcapng: explicitly disabled via build config 00:01:59.928 rawdev: explicitly disabled via build config 00:01:59.928 regexdev: explicitly disabled via build config 00:01:59.928 mldev: explicitly disabled via build config 00:01:59.928 rib: explicitly disabled via build config 00:01:59.928 sched: explicitly disabled via build config 00:01:59.928 stack: explicitly disabled via build config 00:01:59.928 ipsec: explicitly disabled via build config 00:01:59.928 pdcp: explicitly disabled via build config 00:01:59.928 fib: explicitly disabled via build config 00:01:59.928 port: explicitly disabled via build config 00:01:59.928 pdump: explicitly disabled via build config 00:01:59.928 table: explicitly disabled via build config 00:01:59.928 pipeline: explicitly disabled via build config 00:01:59.928 graph: explicitly disabled via build config 00:01:59.928 node: explicitly disabled via build config 00:01:59.928 00:01:59.928 drivers: 00:01:59.928 common/cpt: not in enabled drivers build config 00:01:59.928 common/dpaax: not in enabled drivers build config 00:01:59.928 common/iavf: not in enabled drivers build config 00:01:59.928 common/idpf: not in enabled drivers build config 00:01:59.928 common/ionic: not in enabled drivers build config 00:01:59.928 common/mvep: not in enabled drivers build config 00:01:59.928 common/octeontx: not in enabled drivers build config 00:01:59.928 bus/auxiliary: not in enabled drivers build config 00:01:59.928 bus/cdx: not in enabled drivers build config 00:01:59.928 bus/dpaa: not in enabled drivers build config 00:01:59.928 bus/fslmc: not in enabled drivers build config 00:01:59.928 bus/ifpga: not in enabled drivers build config 00:01:59.928 bus/platform: not in enabled drivers build config 00:01:59.928 bus/uacce: not in enabled drivers build config 00:01:59.928 bus/vmbus: not in enabled drivers build config 00:01:59.928 common/cnxk: not in enabled drivers build config 00:01:59.928 common/mlx5: not in enabled drivers build config 00:01:59.928 common/nfp: not in enabled drivers build config 00:01:59.928 common/nitrox: not in enabled drivers build config 00:01:59.928 common/qat: not in enabled drivers build config 00:01:59.928 common/sfc_efx: not in enabled drivers build config 00:01:59.928 mempool/bucket: not in enabled drivers build config 00:01:59.928 mempool/cnxk: not in enabled drivers build config 00:01:59.928 mempool/dpaa: not in enabled drivers build config 00:01:59.928 mempool/dpaa2: 
not in enabled drivers build config 00:01:59.928 mempool/octeontx: not in enabled drivers build config 00:01:59.928 mempool/stack: not in enabled drivers build config 00:01:59.928 dma/cnxk: not in enabled drivers build config 00:01:59.928 dma/dpaa: not in enabled drivers build config 00:01:59.928 dma/dpaa2: not in enabled drivers build config 00:01:59.928 dma/hisilicon: not in enabled drivers build config 00:01:59.928 dma/idxd: not in enabled drivers build config 00:01:59.928 dma/ioat: not in enabled drivers build config 00:01:59.928 dma/skeleton: not in enabled drivers build config 00:01:59.928 net/af_packet: not in enabled drivers build config 00:01:59.928 net/af_xdp: not in enabled drivers build config 00:01:59.928 net/ark: not in enabled drivers build config 00:01:59.928 net/atlantic: not in enabled drivers build config 00:01:59.928 net/avp: not in enabled drivers build config 00:01:59.928 net/axgbe: not in enabled drivers build config 00:01:59.928 net/bnx2x: not in enabled drivers build config 00:01:59.928 net/bnxt: not in enabled drivers build config 00:01:59.928 net/bonding: not in enabled drivers build config 00:01:59.928 net/cnxk: not in enabled drivers build config 00:01:59.928 net/cpfl: not in enabled drivers build config 00:01:59.928 net/cxgbe: not in enabled drivers build config 00:01:59.928 net/dpaa: not in enabled drivers build config 00:01:59.928 net/dpaa2: not in enabled drivers build config 00:01:59.928 net/e1000: not in enabled drivers build config 00:01:59.928 net/ena: not in enabled drivers build config 00:01:59.928 net/enetc: not in enabled drivers build config 00:01:59.928 net/enetfec: not in enabled drivers build config 00:01:59.928 net/enic: not in enabled drivers build config 00:01:59.928 net/failsafe: not in enabled drivers build config 00:01:59.928 net/fm10k: not in enabled drivers build config 00:01:59.928 net/gve: not in enabled drivers build config 00:01:59.928 net/hinic: not in enabled drivers build config 00:01:59.928 net/hns3: not in enabled drivers build config 00:01:59.928 net/i40e: not in enabled drivers build config 00:01:59.928 net/iavf: not in enabled drivers build config 00:01:59.928 net/ice: not in enabled drivers build config 00:01:59.928 net/idpf: not in enabled drivers build config 00:01:59.928 net/igc: not in enabled drivers build config 00:01:59.928 net/ionic: not in enabled drivers build config 00:01:59.928 net/ipn3ke: not in enabled drivers build config 00:01:59.928 net/ixgbe: not in enabled drivers build config 00:01:59.928 net/mana: not in enabled drivers build config 00:01:59.928 net/memif: not in enabled drivers build config 00:01:59.928 net/mlx4: not in enabled drivers build config 00:01:59.928 net/mlx5: not in enabled drivers build config 00:01:59.928 net/mvneta: not in enabled drivers build config 00:01:59.928 net/mvpp2: not in enabled drivers build config 00:01:59.928 net/netvsc: not in enabled drivers build config 00:01:59.928 net/nfb: not in enabled drivers build config 00:01:59.928 net/nfp: not in enabled drivers build config 00:01:59.928 net/ngbe: not in enabled drivers build config 00:01:59.928 net/null: not in enabled drivers build config 00:01:59.928 net/octeontx: not in enabled drivers build config 00:01:59.928 net/octeon_ep: not in enabled drivers build config 00:01:59.928 net/pcap: not in enabled drivers build config 00:01:59.928 net/pfe: not in enabled drivers build config 00:01:59.928 net/qede: not in enabled drivers build config 00:01:59.928 net/ring: not in enabled drivers build config 00:01:59.928 net/sfc: not in 
enabled drivers build config 00:01:59.928 net/softnic: not in enabled drivers build config 00:01:59.928 net/tap: not in enabled drivers build config 00:01:59.928 net/thunderx: not in enabled drivers build config 00:01:59.928 net/txgbe: not in enabled drivers build config 00:01:59.928 net/vdev_netvsc: not in enabled drivers build config 00:01:59.928 net/vhost: not in enabled drivers build config 00:01:59.928 net/virtio: not in enabled drivers build config 00:01:59.928 net/vmxnet3: not in enabled drivers build config 00:01:59.928 raw/*: missing internal dependency, "rawdev" 00:01:59.928 crypto/armv8: not in enabled drivers build config 00:01:59.928 crypto/bcmfs: not in enabled drivers build config 00:01:59.928 crypto/caam_jr: not in enabled drivers build config 00:01:59.928 crypto/ccp: not in enabled drivers build config 00:01:59.928 crypto/cnxk: not in enabled drivers build config 00:01:59.928 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.928 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.928 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.928 crypto/mlx5: not in enabled drivers build config 00:01:59.928 crypto/mvsam: not in enabled drivers build config 00:01:59.928 crypto/nitrox: not in enabled drivers build config 00:01:59.928 crypto/null: not in enabled drivers build config 00:01:59.928 crypto/octeontx: not in enabled drivers build config 00:01:59.928 crypto/openssl: not in enabled drivers build config 00:01:59.928 crypto/scheduler: not in enabled drivers build config 00:01:59.928 crypto/uadk: not in enabled drivers build config 00:01:59.928 crypto/virtio: not in enabled drivers build config 00:01:59.928 compress/isal: not in enabled drivers build config 00:01:59.928 compress/mlx5: not in enabled drivers build config 00:01:59.928 compress/nitrox: not in enabled drivers build config 00:01:59.928 compress/octeontx: not in enabled drivers build config 00:01:59.928 compress/zlib: not in enabled drivers build config 00:01:59.928 regex/*: missing internal dependency, "regexdev" 00:01:59.928 ml/*: missing internal dependency, "mldev" 00:01:59.928 vdpa/ifc: not in enabled drivers build config 00:01:59.928 vdpa/mlx5: not in enabled drivers build config 00:01:59.928 vdpa/nfp: not in enabled drivers build config 00:01:59.928 vdpa/sfc: not in enabled drivers build config 00:01:59.928 event/*: missing internal dependency, "eventdev" 00:01:59.928 baseband/*: missing internal dependency, "bbdev" 00:01:59.928 gpu/*: missing internal dependency, "gpudev" 00:01:59.928 00:01:59.928 00:01:59.928 Build targets in project: 85 00:01:59.928 00:01:59.928 DPDK 24.03.0 00:01:59.928 00:01:59.928 User defined options 00:01:59.928 buildtype : debug 00:01:59.928 default_library : shared 00:01:59.928 libdir : lib 00:01:59.928 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.928 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.928 c_link_args : 00:01:59.928 cpu_instruction_set: native 00:01:59.928 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.928 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.928 enable_docs : false 00:01:59.928 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.928 enable_kmods : false 00:01:59.928 max_lcores : 128 00:01:59.928 tests : false 00:01:59.928 00:01:59.928 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.187 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.477 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.477 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.477 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.477 [4/268] Linking static target lib/librte_kvargs.a 00:02:00.477 [5/268] Linking static target lib/librte_log.a 00:02:00.477 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.757 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.757 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.757 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.015 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.015 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.015 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.015 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.015 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.015 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.015 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.015 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.015 [18/268] Linking static target lib/librte_telemetry.a 00:02:01.273 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.273 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.273 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.273 [22/268] Linking target lib/librte_log.so.24.1 00:02:01.273 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.532 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.532 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.532 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.532 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.532 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.532 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.532 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.532 [31/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.790 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.790 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.048 [34/268] 
Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.048 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.048 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:02.048 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.048 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.048 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.048 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.048 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.048 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.048 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.048 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.048 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.048 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.351 [47/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.351 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.609 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.609 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.609 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.609 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.609 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.609 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.867 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.867 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.867 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.867 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.867 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.867 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.124 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.124 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.124 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.124 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.124 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.124 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.381 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.381 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.639 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.639 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.639 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.639 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.639 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
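Aside on the DPDK configuration being built here: the "User defined options" summary printed above maps onto an ordinary meson/ninja invocation. The sketch below is illustrative only — the actual command is assembled by SPDK's dpdk build wrapper (assumed), and the long disable_apps/disable_libs lists shown in the summary are left out for brevity; every option name and value used comes from that summary.

    # Hedged sketch of the configure/build step implied by the option summary above;
    # not the literal command the CI ran. Paths follow the prefix/build dir in the log.
    DPDK_SRC=/home/vagrant/spdk_repo/spdk/dpdk
    meson setup "$DPDK_SRC/build-tmp" "$DPDK_SRC" \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix="$DPDK_SRC/build" \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dmax_lcores=128 -Dcpu_instruction_set=native
    # Same "ninja -C .../build-tmp -j 10" form as the backend command reported later in this log.
    ninja -C "$DPDK_SRC/build-tmp" -j 10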
00:02:03.639 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.639 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.639 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.897 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.897 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.897 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.897 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.898 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.155 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.155 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.155 [84/268] Linking static target lib/librte_ring.a 00:02:04.413 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.413 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:04.413 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.413 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.413 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:04.413 [90/268] Linking static target lib/librte_rcu.a 00:02:04.413 [91/268] Linking static target lib/librte_eal.a 00:02:04.414 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.414 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:04.672 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:04.672 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.672 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.672 [97/268] Linking static target lib/librte_mempool.a 00:02:04.672 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.672 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:04.672 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:04.942 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:04.942 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.942 [103/268] Linking static target lib/librte_mbuf.a 00:02:04.942 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.942 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.942 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.200 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.200 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.200 [109/268] Linking static target lib/librte_meter.a 00:02:05.200 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.200 [111/268] Linking static target lib/librte_net.a 00:02:05.457 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.457 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.457 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.457 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.457 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.715 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.972 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.972 [119/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.972 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.972 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.972 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:06.230 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.488 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:06.488 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.488 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.488 [127/268] Linking static target lib/librte_pci.a 00:02:06.488 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:06.488 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.488 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.488 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:06.488 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.488 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.747 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.747 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.747 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.747 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.747 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.747 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:06.747 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.747 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.747 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.747 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.747 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.747 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.005 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.005 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.006 [148/268] Linking static target lib/librte_cmdline.a 00:02:07.006 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.264 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.264 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.264 [152/268] Linking static target lib/librte_timer.a 00:02:07.264 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.264 [154/268] Linking static target lib/librte_ethdev.a 00:02:07.524 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.524 [156/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.524 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.524 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.524 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.524 [160/268] Linking static target lib/librte_compressdev.a 00:02:07.524 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.524 [162/268] Linking static target lib/librte_hash.a 00:02:07.782 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.782 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.041 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.041 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.041 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.041 [168/268] Linking static target lib/librte_dmadev.a 00:02:08.041 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:08.299 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.299 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.299 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.299 [173/268] Linking static target lib/librte_cryptodev.a 00:02:08.299 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:08.299 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.557 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.557 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.557 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.816 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.816 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:08.816 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.816 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.816 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:08.816 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.074 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.074 [186/268] Linking static target lib/librte_power.a 00:02:09.074 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.074 [188/268] Linking static target lib/librte_reorder.a 00:02:09.333 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.333 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.333 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.333 [192/268] Linking static target lib/librte_security.a 00:02:09.333 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:09.592 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.592 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.852 [196/268] 
Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.852 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.852 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:09.852 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.110 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.110 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.110 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.369 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:10.369 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:10.369 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:10.369 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.369 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:10.628 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:10.628 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:10.628 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:10.628 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:10.628 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.628 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:10.628 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:10.628 [215/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:10.628 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.628 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:10.628 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:10.628 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.628 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.628 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:10.887 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:10.887 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.887 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.887 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.887 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:10.887 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.455 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.714 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.714 [230/268] Linking static target lib/librte_vhost.a 00:02:14.348 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.951 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.951 
[233/268] Linking target lib/librte_eal.so.24.1 00:02:16.951 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:16.951 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:16.951 [236/268] Linking target lib/librte_ring.so.24.1 00:02:16.951 [237/268] Linking target lib/librte_meter.so.24.1 00:02:16.951 [238/268] Linking target lib/librte_pci.so.24.1 00:02:16.951 [239/268] Linking target lib/librte_timer.so.24.1 00:02:16.951 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:16.951 [241/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.209 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.209 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.209 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.209 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.209 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.209 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:17.209 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:17.209 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.209 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.209 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.468 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.468 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.468 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.468 [255/268] Linking target lib/librte_net.so.24.1 00:02:17.468 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:17.468 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:17.468 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:17.727 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.727 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.727 [261/268] Linking target lib/librte_hash.so.24.1 00:02:17.727 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:17.727 [263/268] Linking target lib/librte_security.so.24.1 00:02:17.727 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.053 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.053 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.053 [267/268] Linking target lib/librte_power.so.24.1 00:02:18.053 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.053 INFO: autodetecting backend as ninja 00:02:18.053 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:19.426 CC lib/log/log_deprecated.o 00:02:19.426 CC lib/log/log.o 00:02:19.426 CC lib/ut_mock/mock.o 00:02:19.426 CC lib/log/log_flags.o 00:02:19.426 CC lib/ut/ut.o 00:02:19.426 LIB libspdk_ut_mock.a 00:02:19.426 LIB libspdk_log.a 00:02:19.426 LIB libspdk_ut.a 00:02:19.426 SO libspdk_ut_mock.so.6.0 00:02:19.426 SO libspdk_ut.so.2.0 00:02:19.426 SO libspdk_log.so.7.0 00:02:19.426 SYMLINK libspdk_ut_mock.so 00:02:19.426 SYMLINK libspdk_ut.so 00:02:19.703 SYMLINK libspdk_log.so 00:02:19.961 CC lib/dma/dma.o 00:02:19.961 CC 
lib/ioat/ioat.o 00:02:19.961 CC lib/util/base64.o 00:02:19.961 CC lib/util/bit_array.o 00:02:19.961 CC lib/util/crc32.o 00:02:19.961 CC lib/util/crc16.o 00:02:19.961 CC lib/util/cpuset.o 00:02:19.961 CC lib/util/crc32c.o 00:02:19.961 CXX lib/trace_parser/trace.o 00:02:19.961 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.961 CC lib/util/crc32_ieee.o 00:02:19.961 CC lib/util/crc64.o 00:02:19.961 CC lib/util/dif.o 00:02:19.961 CC lib/util/fd.o 00:02:19.961 LIB libspdk_dma.a 00:02:19.961 CC lib/util/fd_group.o 00:02:19.961 CC lib/util/file.o 00:02:20.220 SO libspdk_dma.so.4.0 00:02:20.220 LIB libspdk_ioat.a 00:02:20.220 CC lib/vfio_user/host/vfio_user.o 00:02:20.220 SO libspdk_ioat.so.7.0 00:02:20.220 SYMLINK libspdk_dma.so 00:02:20.220 CC lib/util/hexlify.o 00:02:20.220 CC lib/util/iov.o 00:02:20.220 CC lib/util/math.o 00:02:20.220 SYMLINK libspdk_ioat.so 00:02:20.220 CC lib/util/net.o 00:02:20.220 CC lib/util/pipe.o 00:02:20.220 CC lib/util/strerror_tls.o 00:02:20.220 CC lib/util/string.o 00:02:20.220 CC lib/util/uuid.o 00:02:20.220 CC lib/util/xor.o 00:02:20.220 LIB libspdk_vfio_user.a 00:02:20.220 CC lib/util/zipf.o 00:02:20.477 SO libspdk_vfio_user.so.5.0 00:02:20.477 SYMLINK libspdk_vfio_user.so 00:02:20.477 LIB libspdk_util.a 00:02:20.736 SO libspdk_util.so.9.1 00:02:20.736 LIB libspdk_trace_parser.a 00:02:20.736 SO libspdk_trace_parser.so.5.0 00:02:20.736 SYMLINK libspdk_util.so 00:02:20.994 SYMLINK libspdk_trace_parser.so 00:02:20.994 CC lib/vmd/led.o 00:02:20.994 CC lib/vmd/vmd.o 00:02:20.994 CC lib/rdma_provider/common.o 00:02:20.994 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:20.994 CC lib/conf/conf.o 00:02:20.994 CC lib/idxd/idxd.o 00:02:20.994 CC lib/env_dpdk/env.o 00:02:20.994 CC lib/idxd/idxd_user.o 00:02:20.994 CC lib/json/json_parse.o 00:02:20.994 CC lib/rdma_utils/rdma_utils.o 00:02:21.251 CC lib/json/json_util.o 00:02:21.251 CC lib/json/json_write.o 00:02:21.251 LIB libspdk_rdma_provider.a 00:02:21.251 LIB libspdk_conf.a 00:02:21.251 SO libspdk_rdma_provider.so.6.0 00:02:21.251 CC lib/idxd/idxd_kernel.o 00:02:21.251 CC lib/env_dpdk/memory.o 00:02:21.251 SO libspdk_conf.so.6.0 00:02:21.251 LIB libspdk_rdma_utils.a 00:02:21.251 SYMLINK libspdk_conf.so 00:02:21.251 SO libspdk_rdma_utils.so.1.0 00:02:21.251 SYMLINK libspdk_rdma_provider.so 00:02:21.251 CC lib/env_dpdk/pci.o 00:02:21.251 CC lib/env_dpdk/init.o 00:02:21.251 SYMLINK libspdk_rdma_utils.so 00:02:21.251 CC lib/env_dpdk/threads.o 00:02:21.251 CC lib/env_dpdk/pci_ioat.o 00:02:21.251 CC lib/env_dpdk/pci_virtio.o 00:02:21.509 LIB libspdk_json.a 00:02:21.509 SO libspdk_json.so.6.0 00:02:21.509 LIB libspdk_idxd.a 00:02:21.509 CC lib/env_dpdk/pci_vmd.o 00:02:21.509 SO libspdk_idxd.so.12.0 00:02:21.509 CC lib/env_dpdk/pci_idxd.o 00:02:21.509 LIB libspdk_vmd.a 00:02:21.509 SYMLINK libspdk_json.so 00:02:21.509 CC lib/env_dpdk/pci_event.o 00:02:21.509 SYMLINK libspdk_idxd.so 00:02:21.509 SO libspdk_vmd.so.6.0 00:02:21.509 CC lib/env_dpdk/sigbus_handler.o 00:02:21.509 CC lib/env_dpdk/pci_dpdk.o 00:02:21.509 SYMLINK libspdk_vmd.so 00:02:21.509 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.509 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.767 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.767 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.767 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.767 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.024 LIB libspdk_jsonrpc.a 00:02:22.024 SO libspdk_jsonrpc.so.6.0 00:02:22.024 SYMLINK libspdk_jsonrpc.so 00:02:22.282 LIB libspdk_env_dpdk.a 00:02:22.282 SO libspdk_env_dpdk.so.14.1 00:02:22.545 SYMLINK 
libspdk_env_dpdk.so 00:02:22.545 CC lib/rpc/rpc.o 00:02:22.803 LIB libspdk_rpc.a 00:02:22.803 SO libspdk_rpc.so.6.0 00:02:22.803 SYMLINK libspdk_rpc.so 00:02:23.062 CC lib/trace/trace.o 00:02:23.062 CC lib/trace/trace_rpc.o 00:02:23.062 CC lib/trace/trace_flags.o 00:02:23.062 CC lib/notify/notify.o 00:02:23.062 CC lib/notify/notify_rpc.o 00:02:23.321 CC lib/keyring/keyring.o 00:02:23.321 CC lib/keyring/keyring_rpc.o 00:02:23.321 LIB libspdk_notify.a 00:02:23.321 SO libspdk_notify.so.6.0 00:02:23.321 LIB libspdk_trace.a 00:02:23.321 LIB libspdk_keyring.a 00:02:23.580 SYMLINK libspdk_notify.so 00:02:23.580 SO libspdk_trace.so.10.0 00:02:23.580 SO libspdk_keyring.so.1.0 00:02:23.580 SYMLINK libspdk_trace.so 00:02:23.580 SYMLINK libspdk_keyring.so 00:02:23.838 CC lib/sock/sock.o 00:02:23.839 CC lib/sock/sock_rpc.o 00:02:23.839 CC lib/thread/thread.o 00:02:23.839 CC lib/thread/iobuf.o 00:02:24.405 LIB libspdk_sock.a 00:02:24.405 SO libspdk_sock.so.10.0 00:02:24.405 SYMLINK libspdk_sock.so 00:02:24.972 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.972 CC lib/nvme/nvme_ctrlr.o 00:02:24.972 CC lib/nvme/nvme_fabric.o 00:02:24.972 CC lib/nvme/nvme_ns_cmd.o 00:02:24.972 CC lib/nvme/nvme_ns.o 00:02:24.972 CC lib/nvme/nvme_pcie_common.o 00:02:24.972 CC lib/nvme/nvme_pcie.o 00:02:24.972 CC lib/nvme/nvme_qpair.o 00:02:24.972 CC lib/nvme/nvme.o 00:02:25.231 LIB libspdk_thread.a 00:02:25.231 SO libspdk_thread.so.10.1 00:02:25.488 SYMLINK libspdk_thread.so 00:02:25.488 CC lib/nvme/nvme_quirks.o 00:02:25.488 CC lib/nvme/nvme_transport.o 00:02:25.488 CC lib/nvme/nvme_discovery.o 00:02:25.488 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.488 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.488 CC lib/nvme/nvme_tcp.o 00:02:25.488 CC lib/nvme/nvme_opal.o 00:02:25.745 CC lib/nvme/nvme_io_msg.o 00:02:25.745 CC lib/nvme/nvme_poll_group.o 00:02:26.003 CC lib/nvme/nvme_zns.o 00:02:26.003 CC lib/nvme/nvme_stubs.o 00:02:26.003 CC lib/nvme/nvme_auth.o 00:02:26.003 CC lib/nvme/nvme_cuse.o 00:02:26.003 CC lib/nvme/nvme_rdma.o 00:02:26.262 CC lib/accel/accel.o 00:02:26.533 CC lib/blob/blobstore.o 00:02:26.533 CC lib/accel/accel_rpc.o 00:02:26.533 CC lib/init/json_config.o 00:02:26.533 CC lib/virtio/virtio.o 00:02:26.533 CC lib/virtio/virtio_vhost_user.o 00:02:26.791 CC lib/virtio/virtio_vfio_user.o 00:02:26.791 CC lib/init/subsystem.o 00:02:26.791 CC lib/blob/request.o 00:02:26.791 CC lib/blob/zeroes.o 00:02:26.791 CC lib/blob/blob_bs_dev.o 00:02:26.791 CC lib/virtio/virtio_pci.o 00:02:26.791 CC lib/accel/accel_sw.o 00:02:27.050 CC lib/init/subsystem_rpc.o 00:02:27.050 CC lib/init/rpc.o 00:02:27.050 LIB libspdk_init.a 00:02:27.050 LIB libspdk_virtio.a 00:02:27.317 LIB libspdk_accel.a 00:02:27.317 SO libspdk_init.so.5.0 00:02:27.317 SO libspdk_virtio.so.7.0 00:02:27.317 SO libspdk_accel.so.15.1 00:02:27.317 LIB libspdk_nvme.a 00:02:27.317 SYMLINK libspdk_init.so 00:02:27.317 SYMLINK libspdk_virtio.so 00:02:27.317 SYMLINK libspdk_accel.so 00:02:27.577 SO libspdk_nvme.so.13.1 00:02:27.577 CC lib/event/app.o 00:02:27.577 CC lib/event/reactor.o 00:02:27.577 CC lib/event/app_rpc.o 00:02:27.577 CC lib/event/log_rpc.o 00:02:27.577 CC lib/event/scheduler_static.o 00:02:27.577 CC lib/bdev/bdev.o 00:02:27.577 CC lib/bdev/bdev_rpc.o 00:02:27.577 CC lib/bdev/bdev_zone.o 00:02:27.835 CC lib/bdev/part.o 00:02:27.835 SYMLINK libspdk_nvme.so 00:02:27.835 CC lib/bdev/scsi_nvme.o 00:02:28.093 LIB libspdk_event.a 00:02:28.093 SO libspdk_event.so.14.0 00:02:28.093 SYMLINK libspdk_event.so 00:02:29.028 LIB libspdk_blob.a 00:02:29.286 SO libspdk_blob.so.11.0 
00:02:29.286 SYMLINK libspdk_blob.so 00:02:29.853 CC lib/blobfs/blobfs.o 00:02:29.853 CC lib/blobfs/tree.o 00:02:29.853 CC lib/lvol/lvol.o 00:02:29.853 LIB libspdk_bdev.a 00:02:29.853 SO libspdk_bdev.so.15.1 00:02:30.111 SYMLINK libspdk_bdev.so 00:02:30.369 CC lib/nbd/nbd.o 00:02:30.369 CC lib/nbd/nbd_rpc.o 00:02:30.369 CC lib/scsi/dev.o 00:02:30.369 CC lib/ftl/ftl_core.o 00:02:30.369 CC lib/scsi/lun.o 00:02:30.369 CC lib/ftl/ftl_init.o 00:02:30.369 CC lib/nvmf/ctrlr.o 00:02:30.369 CC lib/ublk/ublk.o 00:02:30.369 LIB libspdk_blobfs.a 00:02:30.369 SO libspdk_blobfs.so.10.0 00:02:30.369 CC lib/nvmf/ctrlr_discovery.o 00:02:30.369 LIB libspdk_lvol.a 00:02:30.369 CC lib/ftl/ftl_layout.o 00:02:30.369 SYMLINK libspdk_blobfs.so 00:02:30.369 CC lib/ftl/ftl_debug.o 00:02:30.627 SO libspdk_lvol.so.10.0 00:02:30.627 CC lib/ftl/ftl_io.o 00:02:30.627 SYMLINK libspdk_lvol.so 00:02:30.627 CC lib/ftl/ftl_sb.o 00:02:30.627 CC lib/scsi/port.o 00:02:30.627 CC lib/scsi/scsi.o 00:02:30.627 LIB libspdk_nbd.a 00:02:30.627 SO libspdk_nbd.so.7.0 00:02:30.627 CC lib/scsi/scsi_bdev.o 00:02:30.627 CC lib/ftl/ftl_l2p.o 00:02:30.627 CC lib/scsi/scsi_pr.o 00:02:30.627 SYMLINK libspdk_nbd.so 00:02:30.886 CC lib/scsi/scsi_rpc.o 00:02:30.886 CC lib/ftl/ftl_l2p_flat.o 00:02:30.886 CC lib/scsi/task.o 00:02:30.886 CC lib/ftl/ftl_nv_cache.o 00:02:30.886 CC lib/ublk/ublk_rpc.o 00:02:30.886 CC lib/ftl/ftl_band.o 00:02:30.886 CC lib/ftl/ftl_band_ops.o 00:02:30.886 CC lib/nvmf/ctrlr_bdev.o 00:02:30.886 CC lib/nvmf/subsystem.o 00:02:30.886 CC lib/ftl/ftl_writer.o 00:02:30.886 LIB libspdk_ublk.a 00:02:31.144 CC lib/ftl/ftl_rq.o 00:02:31.144 SO libspdk_ublk.so.3.0 00:02:31.144 LIB libspdk_scsi.a 00:02:31.144 SYMLINK libspdk_ublk.so 00:02:31.144 CC lib/nvmf/nvmf.o 00:02:31.144 SO libspdk_scsi.so.9.0 00:02:31.144 CC lib/nvmf/nvmf_rpc.o 00:02:31.144 CC lib/ftl/ftl_reloc.o 00:02:31.144 CC lib/nvmf/transport.o 00:02:31.144 CC lib/nvmf/tcp.o 00:02:31.144 SYMLINK libspdk_scsi.so 00:02:31.144 CC lib/ftl/ftl_l2p_cache.o 00:02:31.402 CC lib/nvmf/stubs.o 00:02:31.402 CC lib/ftl/ftl_p2l.o 00:02:31.658 CC lib/nvmf/mdns_server.o 00:02:31.658 CC lib/nvmf/rdma.o 00:02:31.658 CC lib/ftl/mngt/ftl_mngt.o 00:02:31.915 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:31.915 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:31.915 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:31.915 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:31.915 CC lib/nvmf/auth.o 00:02:31.915 CC lib/iscsi/conn.o 00:02:31.915 CC lib/vhost/vhost.o 00:02:31.915 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.174 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.174 CC lib/iscsi/init_grp.o 00:02:32.174 CC lib/vhost/vhost_rpc.o 00:02:32.174 CC lib/iscsi/iscsi.o 00:02:32.175 CC lib/vhost/vhost_scsi.o 00:02:32.175 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.499 CC lib/iscsi/md5.o 00:02:32.499 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.499 CC lib/vhost/vhost_blk.o 00:02:32.499 CC lib/vhost/rte_vhost_user.o 00:02:32.499 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.757 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.757 CC lib/iscsi/param.o 00:02:32.757 CC lib/iscsi/portal_grp.o 00:02:32.757 CC lib/iscsi/tgt_node.o 00:02:32.757 CC lib/iscsi/iscsi_subsystem.o 00:02:32.757 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.016 CC lib/iscsi/iscsi_rpc.o 00:02:33.016 CC lib/iscsi/task.o 00:02:33.016 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.016 CC lib/ftl/utils/ftl_conf.o 00:02:33.016 CC lib/ftl/utils/ftl_md.o 00:02:33.274 CC lib/ftl/utils/ftl_mempool.o 00:02:33.274 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.274 CC lib/ftl/utils/ftl_property.o 00:02:33.274 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.274 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.274 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.274 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.274 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.274 LIB libspdk_iscsi.a 00:02:33.274 LIB libspdk_vhost.a 00:02:33.533 SO libspdk_vhost.so.8.0 00:02:33.533 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.533 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:33.533 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.533 SO libspdk_iscsi.so.8.0 00:02:33.533 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.533 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.533 LIB libspdk_nvmf.a 00:02:33.533 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:33.533 SYMLINK libspdk_vhost.so 00:02:33.533 CC lib/ftl/base/ftl_base_dev.o 00:02:33.533 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.533 SO libspdk_nvmf.so.19.0 00:02:33.791 SYMLINK libspdk_iscsi.so 00:02:33.791 CC lib/ftl/ftl_trace.o 00:02:33.791 SYMLINK libspdk_nvmf.so 00:02:33.791 LIB libspdk_ftl.a 00:02:34.049 SO libspdk_ftl.so.9.0 00:02:34.617 SYMLINK libspdk_ftl.so 00:02:34.875 CC module/env_dpdk/env_dpdk_rpc.o 00:02:34.875 CC module/accel/ioat/accel_ioat.o 00:02:34.875 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:34.875 CC module/keyring/linux/keyring.o 00:02:34.875 CC module/sock/posix/posix.o 00:02:34.875 CC module/accel/dsa/accel_dsa.o 00:02:34.875 CC module/accel/error/accel_error.o 00:02:34.875 CC module/accel/iaa/accel_iaa.o 00:02:34.875 CC module/blob/bdev/blob_bdev.o 00:02:34.875 CC module/keyring/file/keyring.o 00:02:35.134 LIB libspdk_env_dpdk_rpc.a 00:02:35.134 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.134 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.134 CC module/accel/error/accel_error_rpc.o 00:02:35.134 CC module/keyring/linux/keyring_rpc.o 00:02:35.134 CC module/keyring/file/keyring_rpc.o 00:02:35.134 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.134 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.134 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.134 LIB libspdk_scheduler_dynamic.a 00:02:35.134 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.134 LIB libspdk_blob_bdev.a 00:02:35.134 LIB libspdk_keyring_linux.a 00:02:35.134 LIB libspdk_accel_error.a 00:02:35.134 LIB libspdk_keyring_file.a 00:02:35.134 SO libspdk_blob_bdev.so.11.0 00:02:35.481 SYMLINK libspdk_scheduler_dynamic.so 00:02:35.481 SO libspdk_keyring_linux.so.1.0 00:02:35.481 LIB libspdk_accel_ioat.a 00:02:35.481 SO libspdk_accel_error.so.2.0 00:02:35.481 LIB libspdk_accel_iaa.a 00:02:35.481 LIB libspdk_accel_dsa.a 00:02:35.481 SO libspdk_keyring_file.so.1.0 00:02:35.481 SO libspdk_accel_ioat.so.6.0 00:02:35.481 SO libspdk_accel_dsa.so.5.0 00:02:35.481 SYMLINK libspdk_blob_bdev.so 00:02:35.481 SO libspdk_accel_iaa.so.3.0 00:02:35.481 SYMLINK libspdk_keyring_linux.so 00:02:35.481 SYMLINK libspdk_accel_error.so 00:02:35.481 SYMLINK libspdk_keyring_file.so 00:02:35.481 SYMLINK libspdk_accel_ioat.so 00:02:35.481 SYMLINK libspdk_accel_dsa.so 00:02:35.481 SYMLINK libspdk_accel_iaa.so 00:02:35.481 CC module/sock/uring/uring.o 00:02:35.481 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.481 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.481 LIB libspdk_sock_posix.a 00:02:35.739 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.739 SO libspdk_sock_posix.so.6.0 00:02:35.739 CC module/bdev/delay/vbdev_delay.o 00:02:35.739 CC module/bdev/gpt/gpt.o 00:02:35.739 CC module/bdev/error/vbdev_error.o 00:02:35.739 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.739 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.739 CC 
module/bdev/malloc/bdev_malloc.o 00:02:35.739 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:35.739 LIB libspdk_scheduler_gscheduler.a 00:02:35.739 SYMLINK libspdk_sock_posix.so 00:02:35.739 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.739 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:35.739 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.739 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.739 SYMLINK libspdk_scheduler_gscheduler.so 00:02:35.740 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.998 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.998 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.998 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.998 LIB libspdk_bdev_delay.a 00:02:35.998 LIB libspdk_sock_uring.a 00:02:35.998 LIB libspdk_bdev_malloc.a 00:02:35.998 SO libspdk_bdev_delay.so.6.0 00:02:35.998 SO libspdk_sock_uring.so.5.0 00:02:35.998 SO libspdk_bdev_malloc.so.6.0 00:02:35.998 LIB libspdk_blobfs_bdev.a 00:02:35.998 CC module/bdev/null/bdev_null.o 00:02:35.998 LIB libspdk_bdev_error.a 00:02:35.998 SO libspdk_blobfs_bdev.so.6.0 00:02:35.998 SYMLINK libspdk_bdev_delay.so 00:02:35.998 SYMLINK libspdk_sock_uring.so 00:02:35.998 SO libspdk_bdev_error.so.6.0 00:02:35.998 SYMLINK libspdk_bdev_malloc.so 00:02:35.998 CC module/bdev/null/bdev_null_rpc.o 00:02:35.998 SYMLINK libspdk_blobfs_bdev.so 00:02:35.998 LIB libspdk_bdev_gpt.a 00:02:35.998 SYMLINK libspdk_bdev_error.so 00:02:35.998 CC module/bdev/nvme/bdev_nvme.o 00:02:35.998 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.257 SO libspdk_bdev_gpt.so.6.0 00:02:36.257 LIB libspdk_bdev_lvol.a 00:02:36.257 SYMLINK libspdk_bdev_gpt.so 00:02:36.257 CC module/bdev/raid/bdev_raid.o 00:02:36.257 SO libspdk_bdev_lvol.so.6.0 00:02:36.257 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.257 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.257 CC module/bdev/split/vbdev_split.o 00:02:36.257 LIB libspdk_bdev_null.a 00:02:36.257 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.257 SO libspdk_bdev_null.so.6.0 00:02:36.257 SYMLINK libspdk_bdev_lvol.so 00:02:36.257 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.257 SYMLINK libspdk_bdev_null.so 00:02:36.257 CC module/bdev/nvme/nvme_rpc.o 00:02:36.515 CC module/bdev/uring/bdev_uring.o 00:02:36.515 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.515 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.515 LIB libspdk_bdev_passthru.a 00:02:36.515 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.515 SO libspdk_bdev_passthru.so.6.0 00:02:36.515 CC module/bdev/raid/raid0.o 00:02:36.515 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.515 CC module/bdev/raid/raid1.o 00:02:36.515 SYMLINK libspdk_bdev_passthru.so 00:02:36.515 LIB libspdk_bdev_split.a 00:02:36.772 SO libspdk_bdev_split.so.6.0 00:02:36.772 CC module/bdev/uring/bdev_uring_rpc.o 00:02:36.772 LIB libspdk_bdev_zone_block.a 00:02:36.772 SYMLINK libspdk_bdev_split.so 00:02:36.772 CC module/bdev/raid/concat.o 00:02:36.772 CC module/bdev/aio/bdev_aio.o 00:02:36.772 SO libspdk_bdev_zone_block.so.6.0 00:02:36.772 CC module/bdev/ftl/bdev_ftl.o 00:02:36.772 SYMLINK libspdk_bdev_zone_block.so 00:02:36.772 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.772 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.772 LIB libspdk_bdev_uring.a 00:02:36.772 SO libspdk_bdev_uring.so.6.0 00:02:37.030 CC module/bdev/iscsi/bdev_iscsi.o 00:02:37.030 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.030 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:37.030 SYMLINK libspdk_bdev_uring.so 00:02:37.030 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:37.030 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:37.030 CC module/bdev/nvme/vbdev_opal.o 00:02:37.030 LIB libspdk_bdev_raid.a 00:02:37.030 LIB libspdk_bdev_aio.a 00:02:37.030 LIB libspdk_bdev_ftl.a 00:02:37.030 SO libspdk_bdev_aio.so.6.0 00:02:37.030 SO libspdk_bdev_ftl.so.6.0 00:02:37.030 SO libspdk_bdev_raid.so.6.0 00:02:37.030 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.030 SYMLINK libspdk_bdev_aio.so 00:02:37.030 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.030 SYMLINK libspdk_bdev_ftl.so 00:02:37.288 SYMLINK libspdk_bdev_raid.so 00:02:37.288 LIB libspdk_bdev_iscsi.a 00:02:37.288 SO libspdk_bdev_iscsi.so.6.0 00:02:37.288 SYMLINK libspdk_bdev_iscsi.so 00:02:37.288 LIB libspdk_bdev_virtio.a 00:02:37.547 SO libspdk_bdev_virtio.so.6.0 00:02:37.547 SYMLINK libspdk_bdev_virtio.so 00:02:38.114 LIB libspdk_bdev_nvme.a 00:02:38.114 SO libspdk_bdev_nvme.so.7.0 00:02:38.114 SYMLINK libspdk_bdev_nvme.so 00:02:38.681 CC module/event/subsystems/iobuf/iobuf.o 00:02:38.681 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:38.681 CC module/event/subsystems/vmd/vmd.o 00:02:38.681 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:38.681 CC module/event/subsystems/scheduler/scheduler.o 00:02:38.681 CC module/event/subsystems/keyring/keyring.o 00:02:38.681 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:38.681 CC module/event/subsystems/sock/sock.o 00:02:38.939 LIB libspdk_event_keyring.a 00:02:38.939 LIB libspdk_event_scheduler.a 00:02:38.939 LIB libspdk_event_vhost_blk.a 00:02:38.939 LIB libspdk_event_vmd.a 00:02:38.939 LIB libspdk_event_sock.a 00:02:38.939 SO libspdk_event_keyring.so.1.0 00:02:38.939 SO libspdk_event_scheduler.so.4.0 00:02:38.939 LIB libspdk_event_iobuf.a 00:02:38.939 SO libspdk_event_vhost_blk.so.3.0 00:02:38.939 SO libspdk_event_vmd.so.6.0 00:02:38.940 SO libspdk_event_sock.so.5.0 00:02:38.940 SO libspdk_event_iobuf.so.3.0 00:02:38.940 SYMLINK libspdk_event_keyring.so 00:02:38.940 SYMLINK libspdk_event_scheduler.so 00:02:38.940 SYMLINK libspdk_event_vhost_blk.so 00:02:38.940 SYMLINK libspdk_event_vmd.so 00:02:38.940 SYMLINK libspdk_event_iobuf.so 00:02:38.940 SYMLINK libspdk_event_sock.so 00:02:39.526 CC module/event/subsystems/accel/accel.o 00:02:39.526 LIB libspdk_event_accel.a 00:02:39.526 SO libspdk_event_accel.so.6.0 00:02:39.784 SYMLINK libspdk_event_accel.so 00:02:40.041 CC module/event/subsystems/bdev/bdev.o 00:02:40.300 LIB libspdk_event_bdev.a 00:02:40.300 SO libspdk_event_bdev.so.6.0 00:02:40.300 SYMLINK libspdk_event_bdev.so 00:02:40.867 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.867 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.867 CC module/event/subsystems/ublk/ublk.o 00:02:40.867 CC module/event/subsystems/scsi/scsi.o 00:02:40.867 CC module/event/subsystems/nbd/nbd.o 00:02:40.867 LIB libspdk_event_ublk.a 00:02:40.867 LIB libspdk_event_nbd.a 00:02:40.867 LIB libspdk_event_scsi.a 00:02:40.867 SO libspdk_event_ublk.so.3.0 00:02:40.867 LIB libspdk_event_nvmf.a 00:02:40.867 SO libspdk_event_nbd.so.6.0 00:02:40.867 SO libspdk_event_scsi.so.6.0 00:02:41.126 SO libspdk_event_nvmf.so.6.0 00:02:41.126 SYMLINK libspdk_event_ublk.so 00:02:41.126 SYMLINK libspdk_event_scsi.so 00:02:41.126 SYMLINK libspdk_event_nbd.so 00:02:41.126 SYMLINK libspdk_event_nvmf.so 00:02:41.386 CC module/event/subsystems/iscsi/iscsi.o 00:02:41.386 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:41.644 LIB libspdk_event_vhost_scsi.a 00:02:41.644 LIB libspdk_event_iscsi.a 00:02:41.644 SO libspdk_event_vhost_scsi.so.3.0 00:02:41.644 SO libspdk_event_iscsi.so.6.0 00:02:41.644 
SYMLINK libspdk_event_vhost_scsi.so 00:02:41.644 SYMLINK libspdk_event_iscsi.so 00:02:41.902 SO libspdk.so.6.0 00:02:41.902 SYMLINK libspdk.so 00:02:42.162 CXX app/trace/trace.o 00:02:42.162 CC app/trace_record/trace_record.o 00:02:42.162 CC app/spdk_lspci/spdk_lspci.o 00:02:42.162 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.162 CC app/nvmf_tgt/nvmf_main.o 00:02:42.162 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.162 CC app/spdk_tgt/spdk_tgt.o 00:02:42.421 CC examples/util/zipf/zipf.o 00:02:42.421 CC examples/ioat/perf/perf.o 00:02:42.421 CC test/thread/poller_perf/poller_perf.o 00:02:42.421 LINK spdk_lspci 00:02:42.421 LINK interrupt_tgt 00:02:42.421 LINK zipf 00:02:42.421 LINK spdk_trace_record 00:02:42.421 LINK iscsi_tgt 00:02:42.421 LINK poller_perf 00:02:42.421 LINK nvmf_tgt 00:02:42.421 LINK spdk_tgt 00:02:42.421 LINK ioat_perf 00:02:42.681 LINK spdk_trace 00:02:42.681 CC app/spdk_nvme_perf/perf.o 00:02:42.681 CC examples/ioat/verify/verify.o 00:02:42.681 CC app/spdk_nvme_identify/identify.o 00:02:42.681 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.681 CC app/spdk_top/spdk_top.o 00:02:42.940 CC examples/thread/thread/thread_ex.o 00:02:42.940 CC examples/sock/hello_world/hello_sock.o 00:02:42.940 CC test/dma/test_dma/test_dma.o 00:02:42.940 CC test/app/bdev_svc/bdev_svc.o 00:02:42.940 LINK verify 00:02:42.940 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.940 LINK spdk_nvme_discover 00:02:42.940 LINK bdev_svc 00:02:42.940 LINK lsvmd 00:02:42.940 LINK hello_sock 00:02:42.940 LINK thread 00:02:43.200 LINK test_dma 00:02:43.200 CC examples/idxd/perf/perf.o 00:02:43.200 CC app/spdk_dd/spdk_dd.o 00:02:43.200 CC examples/vmd/led/led.o 00:02:43.460 CC test/app/histogram_perf/histogram_perf.o 00:02:43.460 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.460 LINK spdk_nvme_perf 00:02:43.460 CC examples/nvme/hello_world/hello_world.o 00:02:43.460 LINK led 00:02:43.460 LINK spdk_nvme_identify 00:02:43.460 LINK histogram_perf 00:02:43.460 CC examples/nvme/reconnect/reconnect.o 00:02:43.460 LINK idxd_perf 00:02:43.460 LINK spdk_top 00:02:43.718 LINK spdk_dd 00:02:43.718 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:43.718 LINK hello_world 00:02:43.718 CC examples/nvme/arbitration/arbitration.o 00:02:43.718 CC examples/nvme/hotplug/hotplug.o 00:02:43.718 LINK nvme_fuzz 00:02:43.718 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.718 CC test/app/jsoncat/jsoncat.o 00:02:43.718 CC test/app/stub/stub.o 00:02:43.718 LINK reconnect 00:02:43.976 LINK jsoncat 00:02:43.976 LINK cmb_copy 00:02:43.976 LINK hotplug 00:02:43.976 TEST_HEADER include/spdk/accel.h 00:02:43.976 TEST_HEADER include/spdk/accel_module.h 00:02:43.976 TEST_HEADER include/spdk/assert.h 00:02:43.976 TEST_HEADER include/spdk/barrier.h 00:02:43.976 LINK stub 00:02:43.976 TEST_HEADER include/spdk/base64.h 00:02:43.976 TEST_HEADER include/spdk/bdev.h 00:02:43.976 TEST_HEADER include/spdk/bdev_module.h 00:02:43.976 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.976 TEST_HEADER include/spdk/bit_array.h 00:02:43.976 TEST_HEADER include/spdk/bit_pool.h 00:02:43.976 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.976 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.976 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.976 TEST_HEADER include/spdk/blobfs.h 00:02:43.976 TEST_HEADER include/spdk/blob.h 00:02:43.976 TEST_HEADER include/spdk/conf.h 00:02:43.976 TEST_HEADER include/spdk/config.h 00:02:43.976 LINK arbitration 00:02:43.976 TEST_HEADER include/spdk/cpuset.h 00:02:43.976 TEST_HEADER include/spdk/crc16.h 00:02:43.976 TEST_HEADER 
include/spdk/crc32.h 00:02:43.976 TEST_HEADER include/spdk/crc64.h 00:02:43.976 TEST_HEADER include/spdk/dif.h 00:02:43.976 TEST_HEADER include/spdk/dma.h 00:02:43.976 TEST_HEADER include/spdk/endian.h 00:02:43.976 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.976 TEST_HEADER include/spdk/env.h 00:02:43.977 TEST_HEADER include/spdk/event.h 00:02:43.977 TEST_HEADER include/spdk/fd_group.h 00:02:43.977 TEST_HEADER include/spdk/fd.h 00:02:43.977 TEST_HEADER include/spdk/file.h 00:02:43.977 TEST_HEADER include/spdk/ftl.h 00:02:43.977 TEST_HEADER include/spdk/gpt_spec.h 00:02:43.977 TEST_HEADER include/spdk/hexlify.h 00:02:43.977 TEST_HEADER include/spdk/histogram_data.h 00:02:43.977 TEST_HEADER include/spdk/idxd.h 00:02:43.977 CC app/fio/nvme/fio_plugin.o 00:02:43.977 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.977 TEST_HEADER include/spdk/init.h 00:02:43.977 TEST_HEADER include/spdk/ioat.h 00:02:43.977 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.977 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.977 TEST_HEADER include/spdk/json.h 00:02:43.977 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.977 TEST_HEADER include/spdk/keyring.h 00:02:43.977 TEST_HEADER include/spdk/keyring_module.h 00:02:43.977 TEST_HEADER include/spdk/likely.h 00:02:43.977 TEST_HEADER include/spdk/log.h 00:02:43.977 TEST_HEADER include/spdk/lvol.h 00:02:43.977 TEST_HEADER include/spdk/memory.h 00:02:43.977 TEST_HEADER include/spdk/mmio.h 00:02:43.977 TEST_HEADER include/spdk/nbd.h 00:02:43.977 TEST_HEADER include/spdk/net.h 00:02:43.977 TEST_HEADER include/spdk/notify.h 00:02:43.977 TEST_HEADER include/spdk/nvme.h 00:02:43.977 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.977 LINK nvme_manage 00:02:43.977 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.977 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.977 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.977 CC app/fio/bdev/fio_plugin.o 00:02:43.977 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.977 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.977 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.977 TEST_HEADER include/spdk/nvmf.h 00:02:43.977 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.977 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.977 TEST_HEADER include/spdk/opal.h 00:02:43.977 TEST_HEADER include/spdk/opal_spec.h 00:02:43.977 TEST_HEADER include/spdk/pci_ids.h 00:02:43.977 TEST_HEADER include/spdk/pipe.h 00:02:44.236 TEST_HEADER include/spdk/queue.h 00:02:44.236 TEST_HEADER include/spdk/reduce.h 00:02:44.236 TEST_HEADER include/spdk/rpc.h 00:02:44.236 TEST_HEADER include/spdk/scheduler.h 00:02:44.236 TEST_HEADER include/spdk/scsi.h 00:02:44.236 TEST_HEADER include/spdk/scsi_spec.h 00:02:44.236 TEST_HEADER include/spdk/sock.h 00:02:44.236 TEST_HEADER include/spdk/stdinc.h 00:02:44.236 TEST_HEADER include/spdk/string.h 00:02:44.236 TEST_HEADER include/spdk/thread.h 00:02:44.236 TEST_HEADER include/spdk/trace.h 00:02:44.236 TEST_HEADER include/spdk/trace_parser.h 00:02:44.236 TEST_HEADER include/spdk/tree.h 00:02:44.236 TEST_HEADER include/spdk/ublk.h 00:02:44.236 TEST_HEADER include/spdk/util.h 00:02:44.236 TEST_HEADER include/spdk/uuid.h 00:02:44.236 TEST_HEADER include/spdk/version.h 00:02:44.236 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:44.236 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:44.236 TEST_HEADER include/spdk/vhost.h 00:02:44.236 TEST_HEADER include/spdk/vmd.h 00:02:44.236 TEST_HEADER include/spdk/xor.h 00:02:44.236 TEST_HEADER include/spdk/zipf.h 00:02:44.236 CXX test/cpp_headers/accel.o 00:02:44.236 CXX test/cpp_headers/accel_module.o 
00:02:44.236 CC examples/nvme/abort/abort.o 00:02:44.236 CC app/vhost/vhost.o 00:02:44.236 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:44.236 CC examples/accel/perf/accel_perf.o 00:02:44.236 CXX test/cpp_headers/assert.o 00:02:44.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.236 LINK pmr_persistence 00:02:44.236 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.236 LINK vhost 00:02:44.495 CXX test/cpp_headers/barrier.o 00:02:44.495 CXX test/cpp_headers/base64.o 00:02:44.495 LINK spdk_nvme 00:02:44.495 LINK abort 00:02:44.495 LINK spdk_bdev 00:02:44.495 CXX test/cpp_headers/bdev.o 00:02:44.495 CXX test/cpp_headers/bdev_module.o 00:02:44.495 CXX test/cpp_headers/bdev_zone.o 00:02:44.754 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:44.754 LINK accel_perf 00:02:44.754 CC test/env/vtophys/vtophys.o 00:02:44.754 LINK vhost_fuzz 00:02:44.754 CXX test/cpp_headers/bit_array.o 00:02:44.754 CXX test/cpp_headers/bit_pool.o 00:02:44.754 CC test/env/memory/memory_ut.o 00:02:44.754 CC test/env/mem_callbacks/mem_callbacks.o 00:02:44.754 CXX test/cpp_headers/blob_bdev.o 00:02:44.754 LINK vtophys 00:02:44.754 LINK env_dpdk_post_init 00:02:45.013 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.013 CXX test/cpp_headers/blobfs.o 00:02:45.013 CXX test/cpp_headers/blob.o 00:02:45.013 CXX test/cpp_headers/conf.o 00:02:45.013 CC test/env/pci/pci_ut.o 00:02:45.013 CXX test/cpp_headers/config.o 00:02:45.013 CXX test/cpp_headers/cpuset.o 00:02:45.271 CC examples/blob/hello_world/hello_blob.o 00:02:45.271 CXX test/cpp_headers/crc16.o 00:02:45.271 CC test/event/event_perf/event_perf.o 00:02:45.271 CC examples/blob/cli/blobcli.o 00:02:45.271 CC test/event/reactor/reactor.o 00:02:45.271 LINK mem_callbacks 00:02:45.271 LINK event_perf 00:02:45.271 CC test/event/reactor_perf/reactor_perf.o 00:02:45.271 CXX test/cpp_headers/crc32.o 00:02:45.271 LINK iscsi_fuzz 00:02:45.271 LINK pci_ut 00:02:45.271 LINK hello_blob 00:02:45.271 LINK reactor 00:02:45.566 LINK reactor_perf 00:02:45.566 CXX test/cpp_headers/crc64.o 00:02:45.566 CC test/event/app_repeat/app_repeat.o 00:02:45.566 CXX test/cpp_headers/dif.o 00:02:45.566 CXX test/cpp_headers/dma.o 00:02:45.566 LINK blobcli 00:02:45.566 CC test/event/scheduler/scheduler.o 00:02:45.566 CXX test/cpp_headers/endian.o 00:02:45.566 LINK app_repeat 00:02:45.566 CC test/rpc_client/rpc_client_test.o 00:02:45.824 CXX test/cpp_headers/env_dpdk.o 00:02:45.824 LINK memory_ut 00:02:45.824 CC test/nvme/aer/aer.o 00:02:45.824 LINK scheduler 00:02:45.824 CXX test/cpp_headers/env.o 00:02:45.824 CC examples/bdev/hello_world/hello_bdev.o 00:02:45.824 CC examples/bdev/bdevperf/bdevperf.o 00:02:45.824 CC test/nvme/reset/reset.o 00:02:45.824 LINK rpc_client_test 00:02:46.083 CXX test/cpp_headers/event.o 00:02:46.083 CC test/accel/dif/dif.o 00:02:46.083 LINK aer 00:02:46.083 CC test/blobfs/mkfs/mkfs.o 00:02:46.083 LINK hello_bdev 00:02:46.083 CC test/nvme/sgl/sgl.o 00:02:46.083 LINK reset 00:02:46.083 CC test/lvol/esnap/esnap.o 00:02:46.083 CC test/nvme/e2edp/nvme_dp.o 00:02:46.083 CXX test/cpp_headers/fd_group.o 00:02:46.083 LINK mkfs 00:02:46.342 CC test/nvme/overhead/overhead.o 00:02:46.342 CXX test/cpp_headers/fd.o 00:02:46.342 CC test/nvme/err_injection/err_injection.o 00:02:46.342 LINK sgl 00:02:46.342 CC test/nvme/startup/startup.o 00:02:46.342 LINK dif 00:02:46.342 LINK nvme_dp 00:02:46.342 CXX test/cpp_headers/file.o 00:02:46.600 LINK bdevperf 00:02:46.600 CC test/nvme/reserve/reserve.o 00:02:46.600 LINK err_injection 00:02:46.600 LINK overhead 00:02:46.600 LINK 
startup 00:02:46.600 CXX test/cpp_headers/ftl.o 00:02:46.600 CXX test/cpp_headers/gpt_spec.o 00:02:46.600 CC test/nvme/simple_copy/simple_copy.o 00:02:46.600 CC test/nvme/connect_stress/connect_stress.o 00:02:46.600 CXX test/cpp_headers/hexlify.o 00:02:46.600 LINK reserve 00:02:46.859 CC test/nvme/compliance/nvme_compliance.o 00:02:46.859 CC test/nvme/boot_partition/boot_partition.o 00:02:46.859 LINK simple_copy 00:02:46.859 LINK connect_stress 00:02:46.859 CXX test/cpp_headers/histogram_data.o 00:02:46.859 CC test/nvme/fused_ordering/fused_ordering.o 00:02:46.859 CC examples/nvmf/nvmf/nvmf.o 00:02:46.859 LINK boot_partition 00:02:46.859 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:46.859 CC test/bdev/bdevio/bdevio.o 00:02:46.859 CXX test/cpp_headers/idxd.o 00:02:47.118 LINK fused_ordering 00:02:47.118 LINK nvme_compliance 00:02:47.118 CC test/nvme/cuse/cuse.o 00:02:47.118 CC test/nvme/fdp/fdp.o 00:02:47.118 CXX test/cpp_headers/idxd_spec.o 00:02:47.118 LINK doorbell_aers 00:02:47.118 CXX test/cpp_headers/init.o 00:02:47.118 LINK nvmf 00:02:47.118 CXX test/cpp_headers/ioat.o 00:02:47.118 CXX test/cpp_headers/ioat_spec.o 00:02:47.376 CXX test/cpp_headers/iscsi_spec.o 00:02:47.376 CXX test/cpp_headers/json.o 00:02:47.376 CXX test/cpp_headers/jsonrpc.o 00:02:47.376 LINK bdevio 00:02:47.376 CXX test/cpp_headers/keyring.o 00:02:47.376 CXX test/cpp_headers/keyring_module.o 00:02:47.376 CXX test/cpp_headers/likely.o 00:02:47.376 LINK fdp 00:02:47.376 CXX test/cpp_headers/log.o 00:02:47.376 CXX test/cpp_headers/lvol.o 00:02:47.376 CXX test/cpp_headers/memory.o 00:02:47.376 CXX test/cpp_headers/mmio.o 00:02:47.376 CXX test/cpp_headers/nbd.o 00:02:47.635 CXX test/cpp_headers/net.o 00:02:47.635 CXX test/cpp_headers/notify.o 00:02:47.635 CXX test/cpp_headers/nvme.o 00:02:47.635 CXX test/cpp_headers/nvme_intel.o 00:02:47.635 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.635 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.635 CXX test/cpp_headers/nvme_spec.o 00:02:47.635 CXX test/cpp_headers/nvme_zns.o 00:02:47.635 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.635 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.635 CXX test/cpp_headers/nvmf.o 00:02:47.635 CXX test/cpp_headers/nvmf_spec.o 00:02:47.635 CXX test/cpp_headers/nvmf_transport.o 00:02:47.893 CXX test/cpp_headers/opal.o 00:02:47.893 CXX test/cpp_headers/opal_spec.o 00:02:47.893 CXX test/cpp_headers/pci_ids.o 00:02:47.893 CXX test/cpp_headers/pipe.o 00:02:47.893 CXX test/cpp_headers/queue.o 00:02:47.893 CXX test/cpp_headers/reduce.o 00:02:47.893 CXX test/cpp_headers/rpc.o 00:02:47.893 CXX test/cpp_headers/scheduler.o 00:02:47.893 CXX test/cpp_headers/scsi.o 00:02:47.893 CXX test/cpp_headers/scsi_spec.o 00:02:47.893 CXX test/cpp_headers/sock.o 00:02:48.152 CXX test/cpp_headers/stdinc.o 00:02:48.152 CXX test/cpp_headers/string.o 00:02:48.152 CXX test/cpp_headers/thread.o 00:02:48.152 CXX test/cpp_headers/trace.o 00:02:48.152 CXX test/cpp_headers/trace_parser.o 00:02:48.152 LINK cuse 00:02:48.152 CXX test/cpp_headers/tree.o 00:02:48.152 CXX test/cpp_headers/ublk.o 00:02:48.152 CXX test/cpp_headers/util.o 00:02:48.152 CXX test/cpp_headers/uuid.o 00:02:48.152 CXX test/cpp_headers/version.o 00:02:48.152 CXX test/cpp_headers/vfio_user_pci.o 00:02:48.152 CXX test/cpp_headers/vfio_user_spec.o 00:02:48.152 CXX test/cpp_headers/vhost.o 00:02:48.152 CXX test/cpp_headers/vmd.o 00:02:48.152 CXX test/cpp_headers/xor.o 00:02:48.411 CXX test/cpp_headers/zipf.o 00:02:50.352 LINK esnap 00:02:50.610 00:02:50.610 real 1m1.079s 00:02:50.610 user 5m7.485s 00:02:50.610 sys 
1m44.974s 00:02:50.610 20:39:12 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:50.611 20:39:12 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.611 ************************************ 00:02:50.611 END TEST make 00:02:50.611 ************************************ 00:02:50.611 20:39:12 -- common/autotest_common.sh@1142 -- $ return 0 00:02:50.611 20:39:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.611 20:39:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.611 20:39:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.611 20:39:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.611 20:39:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.611 20:39:12 -- pm/common@44 -- $ pid=5139 00:02:50.611 20:39:12 -- pm/common@50 -- $ kill -TERM 5139 00:02:50.611 20:39:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.611 20:39:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.611 20:39:12 -- pm/common@44 -- $ pid=5141 00:02:50.611 20:39:12 -- pm/common@50 -- $ kill -TERM 5141 00:02:50.870 20:39:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:50.870 20:39:12 -- nvmf/common.sh@7 -- # uname -s 00:02:50.870 20:39:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.870 20:39:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.870 20:39:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.870 20:39:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.870 20:39:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.870 20:39:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.870 20:39:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.870 20:39:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.870 20:39:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.870 20:39:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.870 20:39:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:02:50.870 20:39:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:02:50.870 20:39:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.870 20:39:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.870 20:39:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:50.870 20:39:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.870 20:39:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:50.870 20:39:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.870 20:39:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.870 20:39:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.870 20:39:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.870 20:39:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.870 20:39:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.870 20:39:12 -- paths/export.sh@5 -- # export PATH 00:02:50.870 20:39:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.870 20:39:12 -- nvmf/common.sh@47 -- # : 0 00:02:50.870 20:39:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:50.870 20:39:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:50.870 20:39:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.870 20:39:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.870 20:39:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.870 20:39:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:50.870 20:39:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:50.870 20:39:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:50.870 20:39:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.870 20:39:12 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.870 20:39:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.870 20:39:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.870 20:39:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:50.870 20:39:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.870 20:39:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:50.870 20:39:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.870 20:39:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.870 20:39:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.870 20:39:12 -- spdk/autotest.sh@48 -- # udevadm_pid=52766 00:02:50.870 20:39:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.870 20:39:12 -- pm/common@17 -- # local monitor 00:02:50.870 20:39:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.870 20:39:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.870 20:39:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.870 20:39:12 -- pm/common@21 -- # date +%s 00:02:50.870 20:39:12 -- pm/common@25 -- # sleep 1 00:02:50.870 20:39:12 -- pm/common@21 -- # date +%s 00:02:50.870 20:39:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721075952 00:02:50.870 20:39:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721075952 00:02:50.870 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721075952_collect-vmstat.pm.log 00:02:50.870 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721075952_collect-cpu-load.pm.log 00:02:52.249 20:39:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:52.249 20:39:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:52.249 20:39:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:52.249 20:39:13 -- common/autotest_common.sh@10 -- # set +x 00:02:52.249 20:39:13 -- spdk/autotest.sh@59 -- # create_test_list 00:02:52.249 20:39:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:52.249 20:39:13 -- common/autotest_common.sh@10 -- # set +x 00:02:52.249 20:39:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:52.249 20:39:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:52.249 20:39:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:52.249 20:39:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:52.249 20:39:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:52.249 20:39:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:52.249 20:39:13 -- common/autotest_common.sh@1455 -- # uname 00:02:52.249 20:39:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:52.249 20:39:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:52.249 20:39:13 -- common/autotest_common.sh@1475 -- # uname 00:02:52.249 20:39:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:52.249 20:39:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:52.249 20:39:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:52.249 20:39:13 -- spdk/autotest.sh@72 -- # hash lcov 00:02:52.249 20:39:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:52.249 20:39:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:52.249 --rc lcov_branch_coverage=1 00:02:52.249 --rc lcov_function_coverage=1 00:02:52.249 --rc genhtml_branch_coverage=1 00:02:52.249 --rc genhtml_function_coverage=1 00:02:52.249 --rc genhtml_legend=1 00:02:52.249 --rc geninfo_all_blocks=1 00:02:52.249 ' 00:02:52.249 20:39:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:52.249 --rc lcov_branch_coverage=1 00:02:52.249 --rc lcov_function_coverage=1 00:02:52.249 --rc genhtml_branch_coverage=1 00:02:52.249 --rc genhtml_function_coverage=1 00:02:52.249 --rc genhtml_legend=1 00:02:52.249 --rc geninfo_all_blocks=1 00:02:52.249 ' 00:02:52.249 20:39:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:52.249 --rc lcov_branch_coverage=1 00:02:52.249 --rc lcov_function_coverage=1 00:02:52.249 --rc genhtml_branch_coverage=1 00:02:52.249 --rc genhtml_function_coverage=1 00:02:52.249 --rc genhtml_legend=1 00:02:52.249 --rc geninfo_all_blocks=1 00:02:52.249 --no-external' 00:02:52.249 20:39:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:52.249 --rc lcov_branch_coverage=1 00:02:52.249 --rc lcov_function_coverage=1 00:02:52.249 --rc genhtml_branch_coverage=1 00:02:52.249 --rc genhtml_function_coverage=1 00:02:52.249 --rc genhtml_legend=1 00:02:52.249 --rc geninfo_all_blocks=1 00:02:52.250 --no-external' 00:02:52.250 20:39:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:52.250 lcov: LCOV version 
1.14 00:02:52.250 20:39:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:07.151 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:07.151 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:19.354 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:19.354 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:19.355 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:19.355 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:19.355 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:19.355 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:19.355 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:22.639 20:39:44 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:22.639 20:39:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.639 20:39:44 -- common/autotest_common.sh@10 -- # set +x 00:03:22.639 20:39:44 -- spdk/autotest.sh@91 -- # rm -f 00:03:22.639 20:39:44 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:23.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.466 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:23.466 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:23.466 20:39:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:23.466 20:39:45 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:23.466 20:39:45 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:23.466 20:39:45 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:23.466 20:39:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.466 20:39:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:23.466 20:39:45 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:23.466 20:39:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.467 20:39:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:23.467 20:39:45 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:23.467 
20:39:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.467 20:39:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:23.467 20:39:45 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:23.467 20:39:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.467 20:39:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:23.467 20:39:45 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:23.467 20:39:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:23.467 20:39:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.467 20:39:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:23.467 20:39:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:23.467 20:39:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:23.467 20:39:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:23.467 20:39:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:23.467 20:39:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:23.467 No valid GPT data, bailing 00:03:23.467 20:39:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:23.467 20:39:45 -- scripts/common.sh@391 -- # pt= 00:03:23.467 20:39:45 -- scripts/common.sh@392 -- # return 1 00:03:23.467 20:39:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:23.467 1+0 records in 00:03:23.467 1+0 records out 00:03:23.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549586 s, 191 MB/s 00:03:23.467 20:39:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:23.467 20:39:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:23.467 20:39:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:23.467 20:39:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:23.467 20:39:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:23.467 No valid GPT data, bailing 00:03:23.726 20:39:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:23.726 20:39:45 -- scripts/common.sh@391 -- # pt= 00:03:23.726 20:39:45 -- scripts/common.sh@392 -- # return 1 00:03:23.726 20:39:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:23.726 1+0 records in 00:03:23.726 1+0 records out 00:03:23.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579837 s, 181 MB/s 00:03:23.726 20:39:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:23.726 20:39:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:23.726 20:39:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:23.726 20:39:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:23.726 20:39:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:23.726 No valid GPT data, bailing 00:03:23.726 20:39:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:23.726 20:39:45 -- scripts/common.sh@391 -- # pt= 00:03:23.726 20:39:45 -- scripts/common.sh@392 -- # return 1 
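Before the functional tests start, the trace above shows autotest.sh walking every whole NVMe namespace, asking scripts/spdk-gpt.py and then blkid whether a partition table is present, and zeroing the first 1 MiB of any namespace that turns out to be unclaimed ("No valid GPT data, bailing" followed by the dd). A rough standalone sketch of that check-then-wipe loop, for orientation only; the real logic lives in spdk/autotest.sh and scripts/common.sh (block_in_use), so everything below is an approximation of what the harness does rather than the harness itself:

    # Sketch only: mirrors the per-namespace wipe seen in the trace above.
    shopt -s extglob                   # needed for the !(*p*) pattern below
    for dev in /dev/nvme*n!(*p*); do   # whole namespaces only, no partitions
        [[ -b $dev ]] || continue
        # blkid prints a PTTYPE value only when a partition table exists
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -n $pt ]]; then
            echo "leaving $dev alone: partition table '$pt' found"
            continue
        fi
        # same wipe the trace shows: zero the first 1 MiB of the namespace
        dd if=/dev/zero of="$dev" bs=1M count=1
    done

The dd throughput figures printed in the log (e.g. "1.0 MiB copied") come from exactly this kind of single-megabyte write per device.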
00:03:23.726 20:39:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:23.726 1+0 records in 00:03:23.726 1+0 records out 00:03:23.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394383 s, 266 MB/s 00:03:23.726 20:39:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:23.726 20:39:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:23.726 20:39:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:23.727 20:39:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:23.727 20:39:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:23.727 No valid GPT data, bailing 00:03:23.727 20:39:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:23.727 20:39:45 -- scripts/common.sh@391 -- # pt= 00:03:23.727 20:39:45 -- scripts/common.sh@392 -- # return 1 00:03:23.727 20:39:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:23.727 1+0 records in 00:03:23.727 1+0 records out 00:03:23.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736296 s, 142 MB/s 00:03:23.727 20:39:45 -- spdk/autotest.sh@118 -- # sync 00:03:23.727 20:39:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:23.727 20:39:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:23.727 20:39:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:26.318 20:39:47 -- spdk/autotest.sh@124 -- # uname -s 00:03:26.318 20:39:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:26.318 20:39:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:26.318 20:39:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.318 20:39:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.318 20:39:47 -- common/autotest_common.sh@10 -- # set +x 00:03:26.318 ************************************ 00:03:26.318 START TEST setup.sh 00:03:26.318 ************************************ 00:03:26.318 20:39:47 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:26.318 * Looking for test storage... 00:03:26.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:26.318 20:39:47 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:26.318 20:39:47 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:26.318 20:39:47 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:26.318 20:39:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.318 20:39:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.318 20:39:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.318 ************************************ 00:03:26.318 START TEST acl 00:03:26.318 ************************************ 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:26.318 * Looking for test storage... 
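The acl suite that starts here (and unwinds over the next several entries) drives scripts/setup.sh with the PCI_BLOCKED and PCI_ALLOWED filters and then checks which kernel driver each NVMe controller ended up bound to. A minimal sketch of that driver check, using the 0000:00:10.0 BDF that appears in the trace; the actual assertions live in test/setup/acl.sh, so treat this as an illustration rather than the test itself:

    # Sketch: report which driver a PCI function is bound to after setup.sh ran.
    bdf=0000:00:10.0                          # controller BDF from the trace
    drvlink=/sys/bus/pci/devices/$bdf/driver
    if [[ -e $drvlink ]]; then
        # readlink resolves to e.g. /sys/bus/pci/drivers/nvme or .../uio_pci_generic
        echo "$bdf -> $(basename "$(readlink -f "$drvlink")")"
    else
        echo "$bdf -> no driver bound"
    fi

With PCI_BLOCKED=' 0000:00:10.0' the denied test below expects setup.sh to print "Skipping denied controller at 0000:00:10.0" and leave the device on nvme; with PCI_ALLOWED=0000:00:10.0 the allowed test expects the rebind to uio_pci_generic that the trace later records.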
00:03:26.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:26.318 20:39:48 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:26.318 20:39:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:26.318 20:39:48 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:26.318 20:39:48 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:26.318 20:39:48 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:26.318 20:39:48 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:26.318 20:39:48 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:26.318 20:39:48 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.318 20:39:48 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.253 20:39:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:27.253 20:39:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:27.253 20:39:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.253 20:39:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:27.253 20:39:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.253 20:39:49 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:28.185 20:39:49 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.185 Hugepages 00:03:28.185 node hugesize free / total 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.185 00:03:28.185 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.185 20:39:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.185 20:39:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:28.185 20:39:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:28.185 20:39:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:28.185 20:39:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:28.444 20:39:50 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:28.444 20:39:50 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.444 20:39:50 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.444 20:39:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:28.444 ************************************ 00:03:28.444 START TEST denied 00:03:28.444 ************************************ 00:03:28.444 20:39:50 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:28.444 20:39:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:28.444 20:39:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:28.444 20:39:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.444 20:39:50 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:28.444 20:39:50 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:29.404 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.404 20:39:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.362 00:03:30.362 real 0m1.851s 00:03:30.362 user 0m0.674s 00:03:30.362 sys 0m1.169s 00:03:30.362 20:39:52 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.362 20:39:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:30.362 ************************************ 00:03:30.362 END TEST denied 00:03:30.362 ************************************ 00:03:30.362 20:39:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:30.362 20:39:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:30.362 20:39:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.362 20:39:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.362 20:39:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.362 ************************************ 00:03:30.362 START TEST allowed 00:03:30.362 ************************************ 00:03:30.362 20:39:52 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:30.362 20:39:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:30.362 20:39:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:30.362 20:39:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:30.362 20:39:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.362 20:39:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.299 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.299 20:39:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.236 00:03:32.236 real 0m1.949s 00:03:32.236 user 0m0.766s 00:03:32.236 sys 0m1.205s 00:03:32.236 20:39:54 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:32.236 20:39:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:32.236 ************************************ 00:03:32.236 END TEST allowed 00:03:32.236 ************************************ 00:03:32.496 20:39:54 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:32.496 ************************************ 00:03:32.496 END TEST acl 00:03:32.496 ************************************ 00:03:32.496 00:03:32.496 real 0m6.170s 00:03:32.496 user 0m2.436s 00:03:32.496 sys 0m3.788s 00:03:32.496 20:39:54 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.496 20:39:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.496 20:39:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:32.496 20:39:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:32.496 20:39:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.496 20:39:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.496 20:39:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.496 ************************************ 00:03:32.496 START TEST hugepages 00:03:32.496 ************************************ 00:03:32.496 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:32.496 * Looking for test storage... 00:03:32.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 6009388 kB' 'MemAvailable: 7389424 kB' 'Buffers: 2436 kB' 'Cached: 1594272 kB' 'SwapCached: 0 kB' 'Active: 442344 kB' 'Inactive: 1265376 kB' 'Active(anon): 121500 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 112668 kB' 'Mapped: 48720 kB' 'Shmem: 
10488 kB' 'KReclaimable: 61512 kB' 'Slab: 135460 kB' 'SReclaimable: 61512 kB' 'SUnreclaim: 73948 kB' 'KernelStack: 6316 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 333976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.496 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.497 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.497 20:39:54 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.497 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.497 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] -- # continue (this read/skip pair repeats for every /proc/meminfo key from Active(anon) through Unaccepted; none of them is Hugepagesize) 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 20:39:54
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:32.498 20:39:54 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.757 20:39:54 
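The trace up to this point is the hugepages prologue resolving the system's default huge page size: the helper walks /proc/meminfo field by field, skipping every key with continue, and echoes the value (2048 kB here) once the key matches Hugepagesize, after which clear_hp zeroes the existing per-node pools. A minimal bash sketch of that lookup, reconstructed from the trace rather than copied from setup/common.sh (the function name lookup_meminfo and the error handling are illustrative assumptions):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used to strip the "Node N " prefix

  # lookup_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo,
  # or from the per-node meminfo file when a node number is given.
  lookup_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # per-node lines carry a "Node N " prefix; strip it as the trace does
      mem=("${mem[@]#Node +([0-9]) }")
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  default_hugepages=$(lookup_meminfo Hugepagesize)   # -> 2048 (kB) on this VM

The loop is linear in the number of meminfo keys, which is why the xtrace shows one skipped comparison per key before the match.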
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.757 20:39:54 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:32.757 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.757 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.757 20:39:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.757 ************************************ 00:03:32.757 START TEST default_setup 00:03:32.757 ************************************ 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.757 20:39:54 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:33.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.697 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:33.697 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8102736 kB' 'MemAvailable: 9482664 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453772 kB' 'Inactive: 1265384 kB' 'Active(anon): 132928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 124076 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135188 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73908 kB' 'KernelStack: 6320 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.697 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue (the same read/skip pair repeats for every key from MemTotal through Committed_AS; none of them is AnonHugePages) 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55
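Further up, the run_test default_setup prologue traced get_test_nr_hugepages 2097152 0: a 2 GiB request divided by the 2048 kB default page size gives nr_hugepages=1024 on node 0, after clear_hp has written 0 into every existing pool. A rough bash sketch of that arithmetic and the sysfs writes; the paths match what the trace touches, but reserve_hugepages is a made-up name standing in for the real hugepages.sh helpers, and the writes need root:

  #!/usr/bin/env bash
  # Zero every per-node hugepage pool, then reserve size_kb worth of
  # default-size pages on the requested node (sketch only).
  reserve_hugepages() {
      local size_kb=$1 node=${2:-0}
      local hugepagesize_kb nr_hugepages hp
      hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
      nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024

      # clear_hp equivalent: start every pool from a clean slate
      for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
          echo 0 > "$hp"
      done

      # allocate the requested pages on the chosen node
      echo "$nr_hugepages" \
          > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages"
  }

  reserve_hugepages 2097152 0    # the amount the default_setup test asks for

After this, scripts/setup.sh rebinds the NVMe devices (nvme -> uio_pci_generic in the log above) and the test moves on to verification.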
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8102736 kB' 'MemAvailable: 9482664 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453812 kB' 'Inactive: 1265384 kB' 'Active(anon): 132968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135188 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73908 kB' 'KernelStack: 6336 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.698 20:39:55 
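At this point verify_nr_hugepages is re-reading /proc/meminfo: it has already confirmed that transparent hugepages are not forced to [never] and that AnonHugePages is 0, and the scans that follow pull HugePages_Surp and HugePages_Rsvd the same way. A compact sketch of the checks this verification step appears to perform; verify_hugepages and the exact pass/fail policy are assumptions reconstructed from the trace, not lifted from hugepages.sh:

  #!/usr/bin/env bash
  # Sanity-check the hugepage pool after setup (sketch).
  verify_hugepages() {
      local expected=${1:-1024}
      # transparent hugepages must not be disabled outright
      [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] \
          || { echo "THP is set to never" >&2; return 1; }

      local anon surp rsvd total free
      anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)
      surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
      rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)

      # the trace shows anon=0 and surp=0; an untouched pool should also be fully free
      (( anon == 0 && surp == 0 && rsvd == 0 )) || return 1
      (( total == expected && free == expected ))
  }

  verify_hugepages 1024 && echo "hugepage pool looks sane"

The meminfo snapshots printed in the log (HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB) are the values these checks run against.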
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.698 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue (the same read/skip pair repeats for every key from SwapCached through Unaccepted; none of them is HugePages_Surp) 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=':
' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8102736 kB' 'MemAvailable: 9482664 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453480 kB' 'Inactive: 1265384 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 
48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135188 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73908 kB' 'KernelStack: 6320 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.700 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 
20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.701 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.962 nr_hugepages=1024 00:03:33.962 resv_hugepages=0 00:03:33.962 surplus_hugepages=0 00:03:33.962 anon_hugepages=0 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.962 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8102736 kB' 'MemAvailable: 9482664 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453796 kB' 'Inactive: 1265384 kB' 'Active(anon): 132952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 124104 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135184 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.963 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 
20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.964 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8102484 kB' 'MemUsed: 4139480 kB' 'SwapCached: 0 kB' 'Active: 453788 kB' 'Inactive: 1265384 kB' 'Active(anon): 132944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265384 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1596696 kB' 'Mapped: 48720 kB' 'AnonPages: 124112 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135180 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.965 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
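The block traced above is the setup/common.sh get_meminfo helper walking every key in /proc/meminfo until it reaches HugePages_Surp, echoing the value and returning. A minimal sketch of that pattern, reconstructed from the xtrace output (the real helper may differ in details such as argument handling and how it iterates the array):

shopt -s extglob                             # needed for the "Node N " prefix strip below

get_meminfo() {                              # e.g. get_meminfo HugePages_Surp [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _ line
    # Per-node queries read the node-specific meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # strip the "Node N " prefix on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # the long run of 'continue' entries above
        echo "$val"
        return 0
    done
    return 1
}

With a helper like this, the surrounding hugepages.sh accounting reduces to lookups such as surp=$(get_meminfo HugePages_Surp) feeding the node0=1024 check echoed below.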
00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.966 node0=1024 expecting 1024 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.966 00:03:33.966 real 0m1.242s 00:03:33.966 user 0m0.512s 00:03:33.966 sys 0m0.694s 00:03:33.966 ************************************ 00:03:33.966 END TEST default_setup 00:03:33.966 ************************************ 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.966 20:39:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:33.966 20:39:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:33.966 20:39:55 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:33.966 20:39:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.966 20:39:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.966 20:39:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.966 ************************************ 00:03:33.966 START TEST per_node_1G_alloc 00:03:33.966 ************************************ 00:03:33.966 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:33.966 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:33.966 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:33.966 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:33.966 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.967 20:39:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.967 20:39:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.537 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.537 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9151820 kB' 'MemAvailable: 10531752 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453984 kB' 'Inactive: 1265388 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124248 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135160 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6292 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.537 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.538 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9152124 kB' 'MemAvailable: 10532056 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453792 kB' 'Inactive: 1265388 kB' 'Active(anon): 132948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124060 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135164 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6336 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.539 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9152124 kB' 'MemAvailable: 10532056 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453780 kB' 'Inactive: 1265388 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135164 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6320 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.540 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.541 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 
20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.542 nr_hugepages=512 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:34.542 resv_hugepages=0 00:03:34.542 surplus_hugepages=0 00:03:34.542 anon_hugepages=0 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9152124 kB' 'MemAvailable: 10532056 kB' 'Buffers: 2436 kB' 'Cached: 1594260 kB' 'SwapCached: 0 kB' 'Active: 453780 kB' 'Inactive: 1265388 kB' 'Active(anon): 132936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 
kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135164 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6320 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:34.542 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.804 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 
20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.805 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9152124 kB' 'MemUsed: 3089840 kB' 'SwapCached: 0 kB' 'Active: 453724 kB' 'Inactive: 1265388 kB' 'Active(anon): 132880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1596696 kB' 'Mapped: 48720 kB' 'AnonPages: 123992 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61280 kB' 'Slab: 135160 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.806 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.807 node0=512 expecting 512 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.807 00:03:34.807 real 0m0.727s 00:03:34.807 user 0m0.315s 00:03:34.807 sys 0m0.442s 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.807 ************************************ 00:03:34.807 END TEST per_node_1G_alloc 00:03:34.807 ************************************ 00:03:34.807 20:39:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.807 20:39:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.807 20:39:56 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:34.807 20:39:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.807 20:39:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.807 20:39:56 
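The trace above wraps up per_node_1G_alloc: get_meminfo reports zero surplus hugepages, node0's tally is folded into the sorted_t/sorted_s maps, and the test passes on "node0=512 expecting 512" ([[ 512 == 512 ]], real 0m0.727s) before run_test moves on to even_2G_alloc. A stand-alone sketch of that kind of per-node count check, written against the kernel's generic per-node sysfs counters rather than SPDK's setup/common.sh helpers (the 512-page target and 2048 kB page size come from the log; everything else is illustrative), could look like:

    # Sketch: confirm each NUMA node holds the expected number of 2 MiB hugepages
    # by reading the kernel's per-node sysfs counters (not the SPDK helper itself).
    expected=512
    for node_dir in /sys/devices/system/node/node*; do
        node=${node_dir##*node}
        actual=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        if [[ "$actual" -eq "$expected" ]]; then
            echo "node${node}=${actual} expecting ${expected}"
        else
            echo "node${node}: got ${actual}, expected ${expected}" >&2
        fi
    done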
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.807 ************************************ 00:03:34.807 START TEST even_2G_alloc 00:03:34.807 ************************************ 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.807 20:39:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.377 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.377 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@92 -- # local surp 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.377 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107448 kB' 'MemAvailable: 9487384 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453992 kB' 'Inactive: 1265392 kB' 'Active(anon): 133148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124284 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135172 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73892 kB' 'KernelStack: 6308 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 
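Just above, get_test_nr_hugepages turns the requested size=2097152 (kB, i.e. 2 GiB) into nr_hugepages=1024 for even_2G_alloc; that is simply the requested size divided by the 2048 kB default hugepage size. Reproducing that arithmetic outside the harness is a short sketch (it reads Hugepagesize from the live /proc/meminfo instead of hard-coding 2048):

    # Sketch: a 2 GiB request expressed in default-sized hugepages.
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo $(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024 on this runner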
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 
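The long run of "[[ <field> == AnonHugePages ]] ... continue" statements here is get_meminfo walking its /proc/meminfo snapshot field by field under xtrace until it reaches the field it was asked for, at which point it echoes the value and returns. A condensed reader with the same effect (a sketch, not SPDK's setup/common.sh implementation, and it ignores the per-node "node=" path) is:

    # Sketch: return the numeric value of a single /proc/meminfo field,
    # stopping at the first match instead of scanning every remaining line.
    get_meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value AnonHugePages    # prints 0 on this runner, matching anon=0 below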
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.378 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107748 kB' 'MemAvailable: 9487684 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453808 kB' 'Inactive: 
1265392 kB' 'Active(anon): 132964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124148 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135176 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73896 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 
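The snapshot being scanned (the long printf block a few lines up) already carries the numbers even_2G_alloc cares about: HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, i.e. the full 2 GiB pool is allocated and still unused. A quick cross-check of that accounting, independent of the test harness (a sketch using the standard /proc/meminfo field names):

    # Sketch: assert that Hugetlb == HugePages_Total * Hugepagesize.
    awk '/^HugePages_Total:/ {n = $2}
         /^Hugepagesize:/    {sz = $2}
         /^Hugetlb:/         {tot = $2}
         END {
             status = (n * sz == tot) ? "consistent" : "inconsistent"
             print "HugePages_Total * Hugepagesize =", n * sz, "kB; Hugetlb =", tot, "kB ->", status
         }' /proc/meminfo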
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 
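verify_nr_hugepages repeats the same field-by-field walk three times: once for AnonHugePages (anon=0 above), the pass for HugePages_Surp running here, and one more for HugePages_Rsvd further down. If the goal is just those three counters, a single pass can collect them all; a sketch outside the harness (variable names are made up, all three values are 0 in the snapshot above):

    # Sketch: grab anon/surp/resv in one read of /proc/meminfo instead of three scans.
    read -r anon surp resv < <(awk '
        /^AnonHugePages:/  {a = $2}
        /^HugePages_Surp:/ {s = $2}
        /^HugePages_Rsvd:/ {r = $2}
        END {print a, s, r}' /proc/meminfo)
    echo "anon=$anon surp=$surp resv=$resv"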
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107748 kB' 'MemAvailable: 9487684 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453828 kB' 'Inactive: 1265392 kB' 'Active(anon): 132984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124152 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135176 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73896 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.382 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.382 nr_hugepages=1024 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.383 resv_hugepages=0 00:03:35.383 surplus_hugepages=0 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.383 anon_hugepages=0 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107748 kB' 'MemAvailable: 9487684 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453848 kB' 'Inactive: 1265392 kB' 'Active(anon): 133004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124180 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135176 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73896 kB' 'KernelStack: 6352 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:35.383 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.384 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107748 kB' 'MemUsed: 4134216 kB' 'SwapCached: 0 kB' 'Active: 453800 kB' 'Inactive: 1265392 kB' 'Active(anon): 132956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1596700 kB' 'Mapped: 48720 kB' 'AnonPages: 124148 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135172 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:35.386 node0=1024 expecting 1024 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.386 00:03:35.386 real 0m0.726s 00:03:35.386 user 0m0.351s 00:03:35.386 sys 0m0.425s 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.386 20:39:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.386 ************************************ 00:03:35.386 END TEST even_2G_alloc 00:03:35.386 ************************************ 00:03:35.644 20:39:57 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.644 20:39:57 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:35.644 20:39:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.644 20:39:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.644 20:39:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.644 ************************************ 00:03:35.644 START TEST odd_alloc 00:03:35.644 ************************************ 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:35.644 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
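The trace above is the tail of even_2G_alloc: get_meminfo walks /proc/meminfo (or the per-node file) key by key until it reaches the HugePages_* counters, and hugepages.sh then asserts at setup/hugepages.sh@110 that HugePages_Total equals the requested pool plus surplus and reserved pages (1024 == 1024 + 0 + 0), before odd_alloc repeats the same verification with an odd count (HUGEMEM=2049, which the trace resolves to nr_hugepages=1025 with a 2048 kB hugepage size). The sketch below reproduces that accounting check as a standalone script; it is an illustrative approximation rather than the setup/common.sh implementation, and the grep/awk get_meminfo helper and the expected-count argument are assumptions made for the example.

#!/usr/bin/env bash
# Minimal sketch of the hugepage accounting check seen in the trace
# (setup/hugepages.sh@110): HugePages_Total == expected + surplus + reserved.
# Assumes a kernel with hugetlb support, i.e. the HugePages_* keys exist.
set -euo pipefail

expected=${1:-1024}    # 1024 for even_2G_alloc, 1025 for odd_alloc

# Illustrative stand-in for setup/common.sh's get_meminfo: print one key's value.
get_meminfo() {
    grep "^$1:" /proc/meminfo | awk '{ print $2 }'
}

total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)

if (( total == expected + surp + resv )); then
    echo "hugepages OK: total=$total expected=$expected surp=$surp resv=$resv"
else
    echo "hugepages mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    exit 1
fi

Saved as, say, check_hugepages.sh and run as "bash check_hugepages.sh 1025" after the odd_alloc setup, this mirrors the system-wide check; the per-node variant in the trace reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.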
00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.645 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.215 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.215 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8098744 kB' 'MemAvailable: 9478680 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 454068 kB' 'Inactive: 1265392 kB' 'Active(anon): 133224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124332 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135208 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73928 kB' 'KernelStack: 6308 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.215 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 
20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.216 
20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.216 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
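Note on the get_meminfo trace above: the long run of [[ ... ]] / continue lines is one pass over /proc/meminfo, reading each line with IFS=': ' and read -r var val _, skipping every field that does not match the requested key, and echoing the matching value (0 for AnonHugePages on this VM, hence anon=0). A condensed, runnable sketch of that pattern is below; it mirrors the traced commands but is not the verbatim setup/common.sh source, and the function name is illustrative.

    # sketch of the traced parsing pattern: echo the value of one /proc/meminfo key
    get_meminfo_sketch() {
        local get=$1 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # (the real helper also strips a "Node <n> " prefix when a per-node meminfo file is used)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip non-matching fields, as seen in the trace
            echo "$val"
            return 0
        done
    }
    get_meminfo_sketch AnonHugePages           # prints 0 on this test VM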
00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8098744 kB' 'MemAvailable: 9478680 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453860 kB' 'Inactive: 1265392 kB' 'Active(anon): 133016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135204 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6336 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.217 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 
20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.218 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8098744 kB' 'MemAvailable: 9478680 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453916 kB' 'Inactive: 1265392 kB' 'Active(anon): 133072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124400 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135200 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73920 kB' 'KernelStack: 6384 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.219 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.220 nr_hugepages=1025 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:36.220 resv_hugepages=0 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.220 surplus_hugepages=0 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.220 anon_hugepages=0 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.220 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8098744 kB' 'MemAvailable: 9478680 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 453712 kB' 'Inactive: 1265392 kB' 'Active(anon): 132868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 124060 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 61280 kB' 'Slab: 135204 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6320 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 346040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.221 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8098744 kB' 'MemUsed: 4143220 kB' 'SwapCached: 0 kB' 'Active: 453872 kB' 'Inactive: 1265392 kB' 'Active(anon): 133028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1596700 kB' 'Mapped: 48720 kB' 'AnonPages: 124232 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61280 kB' 'Slab: 135200 kB' 'SReclaimable: 61280 kB' 'SUnreclaim: 73920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.222 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
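The HugePages_Surp lookup running above and below feeds the per-node bookkeeping that verify_nr_hugepages started at hugepages.sh@112-@117 and finishes further down (@126-@130, ending in the node0=1025 expecting 1025 check). Folded together and simplified to the single node this VM exposes, that bookkeeping looks roughly like the sketch below; the array names follow the trace, while the standalone wrapper, the hard-coded 1025 values, and the direction of the final comparison are assumptions reconstructed from the surrounding entries, not the script itself.

#!/usr/bin/env bash
# Rough reconstruction of the odd_alloc per-node check traced here:
# nodes_sys[] holds what the kernel reports per node, nodes_test[] the
# expectation built from the requested count plus reserved/surplus pages.
shopt -s extglob

resv=0                               # HugePages_Rsvd echoed earlier in the trace
nodes_test[0]=1025                   # expectation for the only node in this VM (assumed preset)

for node in /sys/devices/system/node/node+([0-9]); do   # get_nodes, @29-@30
    nodes_sys[${node##*node}]=1025                       # per-node HugePages_Total
done

for node in "${!nodes_test[@]}"; do                      # @115-@117
    (( nodes_test[node] += resv ))
    surp=0                                               # get_meminfo HugePages_Surp $node
    (( nodes_test[node] += surp ))
done

for node in "${!nodes_test[@]}"; do                      # @126-@130
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done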
00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
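The xtrace above is setup/common.sh's get_meminfo walking node0's meminfo one key at a time until it reaches HugePages_Surp and echoes its value (0). Reconstructed from that trace and packaged as a standalone helper, the lookup amounts to the sketch below; the function name get_meminfo_sketch and the example keys at the bottom are illustrative.

#!/usr/bin/env bash
# Sketch of the lookup traced above: pick the system-wide or per-node meminfo
# file, strip the "Node N " prefix that per-node files carry, then scan
# "key: value" pairs until the requested key matches.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # per-node lines start with "Node N "

    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example usage: reserved hugepages system-wide, free hugepages on node 0.
get_meminfo_sketch HugePages_Rsvd
get_meminfo_sketch HugePages_Free 0

Scanning the full key list is what produces the long runs of continue entries in the trace: every non-matching key (Writeback, AnonPages, Mapped, ...) is read and skipped until the requested one is found.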
00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:36.223 node0=1025 expecting 1025 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:36.223 00:03:36.223 real 0m0.755s 00:03:36.223 user 0m0.370s 00:03:36.223 sys 0m0.430s 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.223 20:39:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.223 ************************************ 00:03:36.223 END TEST odd_alloc 00:03:36.223 ************************************ 00:03:36.482 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.482 20:39:58 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:36.482 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.482 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.482 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.482 ************************************ 00:03:36.482 START TEST custom_alloc 00:03:36.482 ************************************ 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.482 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.003 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.003 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9154712 kB' 'MemAvailable: 10534648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 449196 kB' 'Inactive: 1265392 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119552 kB' 'Mapped: 48224 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134940 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73664 kB' 'KernelStack: 6272 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.003 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241964 kB' 'MemFree: 9154712 kB' 'MemAvailable: 10534648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448816 kB' 'Inactive: 1265392 kB' 'Active(anon): 127972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119172 kB' 'Mapped: 48100 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134936 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73660 kB' 'KernelStack: 6284 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.004 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
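The backslash runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption: the comparison at common.sh@32 is evidently against a quoted variable expansion, and bash's xtrace re-prints a quoted right-hand side of == or != inside [[ ]] with each character escaped so the trace still denotes a literal, non-glob match. The earlier *\[\n\e\v\e\r\]* check is the opposite case, a real glob with only the bracketed part escaped. A small demonstration with made-up variable names:

#!/usr/bin/env bash
set -x
key=HugePages_Surp

# Quoted RHS: literal comparison; xtrace renders it roughly as
#   [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ $key == "$key" ]] && echo literal-match

# Unquoted glob wrapped around a quoted literal, like the
# transparent_hugepage check at hugepages.sh@96 in the trace:
thp='always [madvise] never'
[[ $thp != *"[never]"* ]] && echo 'THP not forced to [never]'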
00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9154712 kB' 'MemAvailable: 10534648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448856 kB' 'Inactive: 1265392 kB' 'Active(anon): 128012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119176 kB' 'Mapped: 48100 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134928 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73652 kB' 'KernelStack: 6284 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.005 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
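Each snapshot above is captured with mapfile -t mem and then passed through mem=("${mem[@]#Node +([0-9]) }"). That expansion is an extglob prefix strip: per-node meminfo files prefix every line with "Node <n> ", and the pattern removes it from each array element, while on plain /proc/meminfo lines nothing matches and the array is left untouched. A tiny reproduction of the idiom, reusing values from the snapshots above:

#!/usr/bin/env bash
shopt -s extglob   # +([0-9]) is an extended glob pattern

# Lines shaped like /sys/devices/system/node/node0/meminfo output:
mem=('Node 0 MemTotal: 12241964 kB' 'Node 0 HugePages_Total: 512')

# Strip the "Node <n> " prefix from every element, as common.sh@29 does;
# for /proc/meminfo lines the pattern does not match and this is a no-op.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# MemTotal: 12241964 kB
# HugePages_Total: 512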
00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:37.006 nr_hugepages=512 00:03:37.006 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.006 resv_hugepages=0 00:03:37.006 surplus_hugepages=0 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.006 anon_hugepages=0 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.006 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9154712 kB' 'MemAvailable: 10534648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448760 kB' 'Inactive: 1265392 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 119048 kB' 'Mapped: 48100 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134924 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73648 kB' 'KernelStack: 6252 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 
20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.007 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9154460 kB' 'MemUsed: 3087504 kB' 'SwapCached: 0 kB' 'Active: 448836 kB' 'Inactive: 1265392 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1596700 kB' 'Mapped: 48100 kB' 'AnonPages: 119176 kB' 'Shmem: 10464 kB' 'KernelStack: 6284 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61276 kB' 'Slab: 134924 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:37.008 node0=512 expecting 512 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:37.008 00:03:37.008 real 0m0.742s 00:03:37.008 user 0m0.336s 00:03:37.008 sys 0m0.456s 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.008 20:39:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:37.008 ************************************ 00:03:37.008 END TEST custom_alloc 
00:03:37.008 ************************************ 00:03:37.266 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:37.266 20:39:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:37.266 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.266 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.266 20:39:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.266 ************************************ 00:03:37.266 START TEST no_shrink_alloc 00:03:37.266 ************************************ 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.266 20:39:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:37.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.790 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.790 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:37.790 
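Here the no_shrink_alloc test begins by sizing its pool: get_test_nr_hugepages is called with 2097152 and node id 0, and the trace shows nr_hugepages=1024 with nodes_test[0]=1024. A rough sketch of that arithmetic, assuming the size argument is in kB and a 2048 kB hugepage size as the meminfo snapshots in this log report; the names follow hugepages.sh@49-73 in the trace, but the body is a simplification, not the verbatim script:

#!/usr/bin/env bash
declare -ga nodes_test=()

get_test_nr_hugepages() {
    local size=$1; shift                 # requested pool size in kB, e.g. 2097152
    local node_ids=("$@")                # optional explicit NUMA nodes, e.g. 0
    local default_hugepages=2048         # Hugepagesize in kB

    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages

    # With explicit node ids, each listed node is asked for the full page count
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages              # nodes_test[0]=1024
    done
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"    # nr_hugepages=1024 node0=1024

verify_nr_hugepages then re-reads HugePages_Total, HugePages_Rsvd and HugePages_Surp through get_meminfo and confirms the kernel actually provided what was requested, just as the 512-page custom_alloc pool was checked above, ending in node0=512 expecting 512.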
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107536 kB' 'MemAvailable: 9487472 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 449260 kB' 'Inactive: 1265392 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 48108 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134880 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73604 kB' 'KernelStack: 6228 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
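Before trusting those numbers, verify_nr_hugepages checks the transparent-hugepage setting: the pattern test on 'always [madvise] never' above confirms THP is not pinned to never, so AnonHugePages is read from meminfo (0 kB in this run) and kept separate from the hugetlb pool under test. A small self-contained sketch of that step, using awk where the real script calls its own get_meminfo helper; the path and the pattern logic mirror the trace, while the function name is illustrative:

#!/usr/bin/env bash
# Report anonymous (THP) huge page usage so it is not confused with the hugetlb pool
check_anon_hugepages() {
    local anon=0 thp_enabled
    thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"

    # Only when THP is not set to [never] can anonymous huge pages appear at all
    if [[ $thp_enabled != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
    echo "anon_hugepages=$anon"     # anon_hugepages=0 in this run
}

check_anon_hugepages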
00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.790 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.790 
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... xtrace condensed: setup/common.sh@31-32 keeps stepping through the remaining /proc/meminfo keys, Inactive(anon) through HardwareCorrupted; none of them matches AnonHugePages, so each iteration hits continue ...]
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:37.791 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8107788 kB' 'MemAvailable: 9487724 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448788 kB' 'Inactive: 1265392 kB' 'Active(anon): 127944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 119052 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134872 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73596 kB' 'KernelStack: 6208 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB'
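What the trace above is doing, step by step: setup/common.sh reads /proc/meminfo into an array with mapfile, strips any per-node "Node <n> " prefix, then replays the lines through a while/read loop with IFS=': ' until the requested key (here HugePages_Surp) matches, and echoes that key's value. A minimal self-contained sketch of that pattern follows; the name get_meminfo_sketch is made up for illustration, and this is a simplification of what the trace shows, not a verbatim copy of the SPDK helper.

    #!/usr/bin/env bash
    # Sketch of the /proc/meminfo lookup pattern exercised in the trace above.
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        # Prefer the per-node meminfo file when a node was requested and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the "Key: value [unit]" lines until the requested key is found.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshot printed above, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch AnonHugePages would print 0.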
[... xtrace condensed: setup/common.sh@31-32 compares every field of that snapshot against HugePages_Surp; MemTotal through HugePages_Rsvd all fail the match and hit continue ...]
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace condensed: setup/common.sh@17-29 repeats the same setup for HugePages_Rsvd (local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip) ...]
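Each counter the test needs is fetched with its own full scan like the one above (AnonHugePages, then HugePages_Surp, with HugePages_Rsvd and HugePages_Total further down). Purely as an illustration of what is being read, and not how setup/common.sh or setup/hugepages.sh is actually written, the same counters can be pulled from /proc/meminfo in a single awk pass:

    # Illustration only: print the hugepage-related counters in key=value form.
    # On this host the output would be AnonHugePages=0, HugePages_Total=1024,
    # HugePages_Free=1024, HugePages_Rsvd=0, HugePages_Surp=0 (the kB suffix is dropped).
    awk '/^(HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages):/ {
            sub(/:$/, "", $1)     # strip the trailing colon from the key
            print $1 "=" $2
         }' /proc/meminfo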
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [second full /proc/meminfo snapshot, identical to the one above except 'Active: 448864 kB' 'Active(anon): 128020 kB' 'AnonPages: 119172 kB' 'KernelStack: 6224 kB' 'PageTables: 3812 kB']
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:37.793 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: setup/common.sh@31-32 scans the snapshot again; every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue ...]
00:03:37.795 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:37.795 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:37.795 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:37.795 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace condensed: setup/common.sh@17-29 repeats the same setup for HugePages_Total (local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip) ...]
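The counters echoed a few lines up are consistent with the snapshots: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so 1024 * 2048 kB = 2097152 kB, which is exactly the Hugetlb figure reported; and with surplus_hugepages=0 and resv_hugepages=0, the arithmetic checks at setup/hugepages.sh@107 and @109 both reduce to 1024 == 1024. A small sketch of that kind of cross-check is below; it is illustrative only (not the test's own code) and assumes a single hugepage size is in use and a kernel that exposes the Hugetlb field, as in this run.

    # Sketch: verify that the hugepage pool size adds up, using live /proc/meminfo values.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)     # 1024 in this run
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)      # 2048 kB in this run
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)        # 2097152 kB in this run
    if (( hugetlb_kb == total * size_kb )); then
        echo "hugepage pool accounts for $((total * size_kb)) kB, matching Hugetlb"
    else
        echo "Hugetlb ($hugetlb_kb kB) != HugePages_Total * Hugepagesize ($((total * size_kb)) kB)" >&2
    fi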
00:03:37.795 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:37.795 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:37.795 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [third full /proc/meminfo snapshot, identical to the first except 'Active: 448780 kB' 'Active(anon): 127936 kB' 'AnonPages: 119040 kB' 'KernelStack: 6208 kB' 'PageTables: 3772 kB']
[... xtrace condensed: setup/common.sh@31-32 scans the snapshot for HugePages_Total; the keys checked so far, MemTotal through Mapped, fail the match and hit continue ...]
00:03:37.796 20:39:59
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.796 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8108048 kB' 'MemUsed: 4133916 kB' 'SwapCached: 0 kB' 'Active: 448960 kB' 'Inactive: 1265392 kB' 'Active(anon): 128116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1596700 kB' 'Mapped: 47984 kB' 'AnonPages: 119836 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61276 kB' 'Slab: 134872 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.797 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 
20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.798 node0=1024 expecting 1024 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.798 20:39:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:38.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.368 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:38.368 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:38.368 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:38.368 20:40:00 
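The INFO line just above comes from scripts/setup.sh being re-run with NRHUGE=512 and CLEAR_HUGE=no: 1024 pages of 2048 kB are already reserved on node0, so nothing is released. Roughly, that decision maps onto the standard kernel sysfs interface as in the sketch below (variable names are illustrative; this is not setup.sh itself, and writing nr_hugepages requires root):

    #!/usr/bin/env bash
    # Rough sketch of a "don't shrink an existing allocation" hugepage step.
    NRHUGE=${NRHUGE:-512}            # pages requested by the test
    CLEAR_HUGE=${CLEAR_HUGE:-no}     # "no": keep whatever is already reserved
    node_nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    current=$(cat "$node_nr")
    if (( current >= NRHUGE )) && [[ $CLEAR_HUGE != yes ]]; then
        echo "Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$node_nr"  # grow (or reset) the per-node pool
    fi

The verify_nr_hugepages pass that starts below re-reads the same counters, which appears to be how this no_shrink_alloc case confirms the existing allocation was left intact.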
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.368 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8104712 kB' 'MemAvailable: 9484648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 449324 kB' 'Inactive: 1265392 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 48112 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134792 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73516 kB' 'KernelStack: 6196 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.369 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.370 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / continue cycle repeats here for VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted ...]
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8104712 kB' 'MemAvailable: 9484648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448980 kB' 'Inactive: 1265392 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119276 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134760 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73484 kB' 'KernelStack: 6208 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB'
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.371 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same IFS=': ' / read -r var val _ / continue cycle repeats for every key from MemTotal through HugePages_Rsvd until the requested key is reached ...]
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
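The xtrace above is the per-key scan performed by the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node is given) into an array, strips any "Node N " prefix, then splits each line with IFS=': ' and skips every key with continue until the requested counter is found, echoing its value. The following is a minimal, self-contained sketch of that pattern; the function name get_meminfo_sketch and its argument handling are reconstructed from the trace for illustration and are not the actual SPDK source.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern visible in the trace above (illustrative).
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val line
        local mem_f=/proc/meminfo
        local mem

        # Per-node counters live under /sys and carry a "Node <N> " line prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix, if any

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] || continue          # skip non-matching keys
            echo "$val"                               # value only, without the kB suffix
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced above

Scanning a pre-read array rather than the file directly is also why the trace is so verbose: every key the loop passes over shows up as its own IFS / read / continue triple in the xtrace output.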
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8104712 kB' 'MemAvailable: 9484648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448984 kB' 'Inactive: 1265392 kB' 'Active(anon): 128140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119276 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134756 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73480 kB' 'KernelStack: 6208 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB'
00:03:38.373 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the IFS=': ' / read -r var val _ / continue cycle repeats for every key from MemTotal through HugePages_Free until HugePages_Rsvd is reached ...]
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.376 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
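At this point hugepages.sh has collected anon=0, surp=0 and resv=0 through the lookups above and asserts that the 1024 pages it configured are all accounted for, using the traced expression (( 1024 == nr_hugepages + surp + resv )). The snapshot data is internally consistent as well: HugePages_Total of 1024 at a Hugepagesize of 2048 kB is exactly the 2097152 kB reported as Hugetlb. Below is a stand-alone sketch of an equivalent accounting check; the awk lookups and variable names are illustrative and are not the setup/hugepages.sh implementation, and the expected value of 1024 is simply taken from this trace.

    #!/usr/bin/env bash
    # Re-derive the counters the trace reads and apply the same arithmetic
    # as the traced (( 1024 == nr_hugepages + surp + resv )) check.
    expected=1024   # target page count for this run (assumption: taken from the trace)

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$total surplus_hugepages=$surp resv_hugepages=$resv"
    echo "hugetlb pool size: $((total * size_kb)) kB"   # 1024 * 2048 kB = 2097152 kB above

    if (( expected == total + surp + resv )); then
        echo "hugepage accounting is consistent"
    else
        echo "unexpected hugepage count: $total (+$surp surplus, +$resv reserved)" >&2
        exit 1
    fi

With the values shown in the snapshots (surp=0, resv=0) the check reduces to HugePages_Total == 1024, which is why the trace follows up with a plain (( 1024 == nr_hugepages )) comparison and then re-reads HugePages_Total.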
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.376 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8104712 kB' 'MemAvailable: 9484648 kB' 'Buffers: 2436 kB' 'Cached: 1594264 kB' 'SwapCached: 0 kB' 'Active: 448848 kB' 'Inactive: 1265392 kB' 'Active(anon): 128004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61276 kB' 'Slab: 134756 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73480 kB' 'KernelStack: 6224 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 328316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB'
[... the IFS=': ' / read -r var val _ / continue cycle repeats for every key from MemTotal through VmallocUsed while looking for HugePages_Total ...]
00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.638 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8104460 kB' 'MemUsed: 4137504 kB' 'SwapCached: 0 kB' 'Active: 
448780 kB' 'Inactive: 1265392 kB' 'Active(anon): 127936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1596700 kB' 'Mapped: 47984 kB' 'AnonPages: 119072 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61276 kB' 'Slab: 134756 kB' 'SReclaimable: 61276 kB' 'SUnreclaim: 73480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 
20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.639 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.640 node0=1024 expecting 1024 00:03:38.640 ************************************ 00:03:38.640 END TEST no_shrink_alloc 00:03:38.640 ************************************ 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.640 00:03:38.640 real 0m1.359s 00:03:38.640 user 0m0.607s 00:03:38.640 sys 0m0.783s 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.640 20:40:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:38.640 20:40:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.640 
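The no_shrink_alloc trace above is one long meminfo lookup: with IFS set to ': ' it reads the file line by line, skips every field that is not the requested key (first HugePages_Total from /proc/meminfo, then HugePages_Surp from the node 0 file), and echoes the matching value. The sketch below reproduces that lookup on its own; the function name and the direct file read are illustrative simplifications, while the "Node N " prefix stripping and the IFS/read pattern come straight from the trace.

shopt -s extglob    # needed for the +([0-9]) prefix pattern used below

# Sketch: look up one meminfo field, optionally for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # Per-node lookups read the node's own meminfo, whose lines start with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # kB for sizes, a plain count for the HugePages_* fields
            return 0
        fi
    done
    return 1
}

In the run above, the HugePages_Total value of 1024 feeds the (( 1024 == nr_hugepages + surp + resv )) assertion, and the per-node HugePages_Surp of 0 is what lets the test end with node0=1024 expecting 1024 before clear_hp writes the hugepage counts back to 0.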
20:40:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.640 20:40:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.640 00:03:38.640 real 0m6.138s 00:03:38.640 user 0m2.681s 00:03:38.640 sys 0m3.622s 00:03:38.640 20:40:00 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.640 ************************************ 00:03:38.640 END TEST hugepages 00:03:38.640 ************************************ 00:03:38.640 20:40:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.640 20:40:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.640 20:40:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:38.640 20:40:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.640 20:40:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.640 20:40:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.640 ************************************ 00:03:38.640 START TEST driver 00:03:38.640 ************************************ 00:03:38.640 20:40:00 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:38.640 * Looking for test storage... 00:03:38.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.640 20:40:00 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:38.640 20:40:00 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.640 20:40:00 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.577 20:40:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:39.577 20:40:01 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.577 20:40:01 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.577 20:40:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.577 ************************************ 00:03:39.577 START TEST guess_driver 00:03:39.577 ************************************ 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:39.577 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:39.577 Looking for driver=uio_pci_generic 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.577 20:40:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.514 20:40:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.449 00:03:41.449 real 0m1.844s 00:03:41.449 user 0m0.624s 00:03:41.449 sys 0m1.276s 00:03:41.449 20:40:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:41.449 ************************************ 00:03:41.449 END TEST guess_driver 00:03:41.449 20:40:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:41.449 ************************************ 00:03:41.449 20:40:03 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:41.449 ************************************ 00:03:41.449 END TEST driver 00:03:41.449 ************************************ 00:03:41.449 00:03:41.449 real 0m2.792s 00:03:41.449 user 0m0.939s 00:03:41.449 sys 0m1.981s 00:03:41.449 20:40:03 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.449 20:40:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:41.449 20:40:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:41.449 20:40:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:41.449 20:40:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.449 20:40:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.449 20:40:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.449 ************************************ 00:03:41.449 START TEST devices 00:03:41.449 ************************************ 00:03:41.449 20:40:03 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:41.709 * Looking for test storage... 00:03:41.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:41.709 20:40:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:41.709 20:40:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:41.709 20:40:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.709 20:40:03 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
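Condensing the guess_driver run that ends above: the vfio path is taken only when /sys/kernel/iommu_groups is non-empty or the unsafe no-IOMMU module parameter reads Y; otherwise the test settles on uio_pci_generic, accepting it once modprobe --show-depends resolves the module chain to real .ko files. A minimal sketch of that decision follows; the function name and the vfio-pci string echoed on the first branch are assumptions for illustration, while the checks themselves mirror the trace.

# Sketch: choose a userspace I/O driver the way the trace above does.
pick_driver_sketch() {
    shopt -s nullglob                  # so an empty iommu_groups dir yields a zero-length array
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_noiommu=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio is only viable with IOMMU groups present or no-IOMMU mode explicitly enabled.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_noiommu == Y ]]; then
        echo vfio-pci            # assumed name for the vfio branch; not exercised in this run
        return 0
    fi
    # Fallback: uio_pci_generic, accepted if modprobe can resolve it to .ko files.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}

On this VM the trace shows zero IOMMU groups and an empty unsafe-mode flag, so the run falls through to uio_pci_generic and the later config/reset steps simply confirm that choice.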
00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:42.647 20:40:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:42.647 No valid GPT data, bailing 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:42.647 20:40:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:42.647 20:40:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:42.647 
20:40:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:42.647 No valid GPT data, bailing 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:42.647 20:40:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:42.647 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:42.647 20:40:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:42.647 20:40:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:42.647 20:40:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:42.648 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:42.648 20:40:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:42.648 20:40:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:42.648 No valid GPT data, bailing 00:03:42.648 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:42.908 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:42.908 20:40:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:42.908 20:40:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:42.908 20:40:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:42.908 20:40:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:42.908 20:40:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:42.908 20:40:04 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:42.908 No valid GPT data, bailing 00:03:42.908 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:42.908 20:40:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:42.908 20:40:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:42.908 20:40:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:42.908 20:40:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:42.908 20:40:04 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:42.908 20:40:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:42.908 20:40:04 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.908 20:40:04 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.908 20:40:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:42.908 ************************************ 00:03:42.908 START TEST nvme_mount 00:03:42.908 ************************************ 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- 
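The devices trace above applies the same three checks to nvme0n1, nvme0n2, nvme0n3 and nvme1n1 before declaring nvme0n1 the test disk: the namespace must not be a zoned block device, must carry no recognisable partition table (spdk-gpt.py and blkid both come back empty, hence "No valid GPT data, bailing"), and must be at least min_disk_size of 3221225472 bytes. The sketch below keeps that shape but swaps the spdk-gpt.py and sec_size_to_bytes helpers for plain blkid and blockdev calls, so treat it as an approximation of the trace rather than the test's own code.

# Sketch: decide whether an NVMe namespace is safe to claim as test storage.
min_disk_size=$((3 * 1024 * 1024 * 1024))      # 3221225472, as in the trace

is_usable_test_disk() {
    local block=$1                             # e.g. nvme0n1
    # Zoned namespaces are recorded separately and never used as plain disks.
    if [[ -e /sys/block/$block/queue/zoned ]]; then
        [[ $(< /sys/block/$block/queue/zoned) == none ]] || return 1
    fi
    # Anything that already has a partition table is considered in use.
    if blkid -s PTTYPE -o value "/dev/$block" | grep -q .; then
        return 1
    fi
    # Finally the namespace has to be big enough for the mount tests.
    local bytes
    bytes=$(blockdev --getsize64 "/dev/$block")
    (( bytes >= min_disk_size ))
}

In the run above the three 4294967296-byte namespaces on nvme0 and the 5368709120-byte nvme1n1 all pass, and nvme0n1 becomes the declared test_disk for the nvme_mount test that starts next.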
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:42.908 20:40:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:43.845 Creating new GPT entries in memory. 00:03:43.845 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:43.845 other utilities. 00:03:43.845 20:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:43.845 20:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.845 20:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:43.845 20:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:43.845 20:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:45.221 Creating new GPT entries in memory. 00:03:45.221 The operation has completed successfully. 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57027 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount 
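The nvme_mount preparation captured above is a short sequence: zap whatever is on /dev/nvme0n1, create a single partition covering sectors 2048 through 264191, wait for the new partition's uevent, format it ext4 and mount it under the test directory. The sketch below replays those steps; the sgdisk arguments and paths are the ones in the trace, while udevadm settle stands in for the repo's sync_dev_uevents.sh helper and the flock wrapper is dropped for brevity.

# Sketch: carve one test partition out of the chosen namespace and mount it.
disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                 # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191       # one 262144-sector partition, same range as the trace
udevadm settle                           # simpler stand-in for sync_dev_uevents.sh block/partition
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"

The "Creating new GPT entries in memory" and "The operation has completed successfully" lines in the trace are sgdisk's own output from exactly these two invocations.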
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.221 20:40:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:45.221 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.221 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:45.221 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.221 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.221 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.221 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.480 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.480 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.480 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.480 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:45.739 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.739 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:45.999 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:45.999 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:45.999 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:45.999 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 
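cleanup_nvme, traced just above, is the inverse of that setup: unmount the test directory if it is still a mount point, then let wipefs strip the ext4 signature from the partition and the GPT/PMBR signatures from the whole disk, which is exactly what the "53 ef", "45 46 49 20 50 41 52 54" and "55 aa" erase messages report. A minimal sketch of the teardown:

# Sketch: tear the nvme_mount state back down, as cleanup_nvme does in the trace.
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

if mountpoint -q "$mnt"; then
    umount "$mnt"
fi
# Erase stale signatures so the namespace can be repartitioned or reformatted cleanly.
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1    # removes the ext4 magic (53 ef)
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1        # removes GPT headers and the protective MBR (55 aa)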
-- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.999 20:40:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:46.259 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.259 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:46.259 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:46.259 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.259 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.259 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.518 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.518 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.777 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:46.777 20:40:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.777 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.778 20:40:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:47.037 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.037 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:47.037 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:47.037 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.037 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.037 20:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.296 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.296 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.296 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.296 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:47.554 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:47.554 00:03:47.554 real 0m4.642s 00:03:47.554 user 0m0.898s 00:03:47.554 sys 0m1.498s 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.554 ************************************ 00:03:47.554 20:40:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:47.554 END TEST nvme_mount 00:03:47.554 ************************************ 00:03:47.554 20:40:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:47.554 20:40:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:47.554 20:40:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.554 20:40:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.554 20:40:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:47.554 ************************************ 00:03:47.554 START TEST dm_mount 00:03:47.554 ************************************ 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
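A minimal sketch of the cleanup pattern the nvme_mount test relies on, with the mount point and device names taken from the run above (they are specific to this VM):

  mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  # unmount only if something is still mounted there, then clear filesystem signatures
  if mountpoint -q "$mount_point"; then
      umount "$mount_point"
  fi
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1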
00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:47.554 20:40:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:48.931 Creating new GPT entries in memory. 00:03:48.931 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:48.931 other utilities. 00:03:48.931 20:40:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:48.931 20:40:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.931 20:40:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.931 20:40:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.931 20:40:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:49.866 Creating new GPT entries in memory. 00:03:49.866 The operation has completed successfully. 00:03:49.866 20:40:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:49.866 20:40:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.866 20:40:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:49.866 20:40:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:49.866 20:40:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:50.802 The operation has completed successfully. 
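The partition layout used by dm_mount is created by the common.sh helpers traced above; a hedged sketch of the equivalent standalone commands (sector offsets copied from the trace, which correspond to two 128 MiB partitions assuming 512-byte sectors):

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                            # destroy any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1
  flock "$disk" sgdisk "$disk" --new=2:264192:526335  # partition 2
  partprobe "$disk"                                   # the test instead waits for udev uevents via sync_dev_uevents.sh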
00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57472 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.802 20:40:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.060 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.060 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:51.060 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.060 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.060 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.060 20:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.328 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.328 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.328 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.328 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.603 20:40:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.861 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.861 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:51.861 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.861 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.861 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:51.861 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.119 20:40:13 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:52.379 20:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.379 20:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:52.379 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:52.379 20:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.379 20:40:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:52.379 00:03:52.379 real 0m4.692s 00:03:52.379 user 0m0.604s 00:03:52.379 sys 0m1.054s 00:03:52.379 20:40:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.379 ************************************ 00:03:52.379 END TEST dm_mount 00:03:52.379 ************************************ 00:03:52.379 20:40:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:52.379 20:40:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:52.379 20:40:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:52.379 20:40:14 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:52.379 20:40:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.379 20:40:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.379 20:40:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:52.379 20:40:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.379 20:40:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.637 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:52.637 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:52.637 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.637 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.637 20:40:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.637 00:03:52.637 real 0m11.147s 00:03:52.637 user 0m2.187s 00:03:52.637 sys 0m3.388s 00:03:52.637 20:40:14 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.637 20:40:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.637 ************************************ 00:03:52.637 END TEST devices 00:03:52.637 ************************************ 00:03:52.637 20:40:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.637 00:03:52.637 real 0m26.619s 00:03:52.637 user 0m8.362s 00:03:52.637 sys 0m13.037s 00:03:52.637 20:40:14 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.637 20:40:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.637 ************************************ 00:03:52.637 END TEST setup.sh 00:03:52.637 ************************************ 00:03:52.894 20:40:14 -- common/autotest_common.sh@1142 -- # return 0 00:03:52.894 20:40:14 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:53.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.462 Hugepages 00:03:53.462 node hugesize free / total 00:03:53.462 node0 1048576kB 0 / 0 00:03:53.721 node0 2048kB 2048 / 2048 00:03:53.721 00:03:53.721 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.721 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:53.721 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:53.979 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:53.979 20:40:15 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.979 20:40:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:53.979 20:40:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:53.979 20:40:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.916 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.916 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.916 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.916 20:40:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:56.292 20:40:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:56.292 20:40:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:56.292 20:40:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.292 20:40:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:56.292 20:40:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:56.292 20:40:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:56.292 20:40:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.292 20:40:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:56.292 20:40:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:56.292 20:40:17 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:56.292 20:40:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:56.292 20:40:17 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.550 Waiting for block devices as requested 00:03:56.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:56.808 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:56.809 20:40:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:56.809 20:40:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:03:56.809 20:40:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:56.809 20:40:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:56.809 20:40:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:56.809 20:40:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1557 -- # continue 
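The namespace-revert pass above resolves each PCI address to its NVMe character device through sysfs and then checks the OACS capability word; a condensed sketch of that flow, assuming nvme-cli is installed (the BDF is one of the two from this run):

  bdf=0000:00:10.0
  # sysfs links every controller back to its PCI device, so readlink + grep finds the right nvmeX
  ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
  # OACS bit 3 (0x8) advertises namespace management support
  oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
      echo "/dev/$ctrlr supports namespace management"
  fi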
00:03:56.809 20:40:18 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:56.809 20:40:18 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:56.809 20:40:18 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:03:56.809 20:40:18 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:56.809 20:40:18 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:56.809 20:40:18 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:56.809 20:40:18 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:56.809 20:40:18 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:56.809 20:40:18 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:56.809 20:40:18 -- common/autotest_common.sh@1557 -- # continue 00:03:56.809 20:40:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:56.809 20:40:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:56.809 20:40:18 -- common/autotest_common.sh@10 -- # set +x 00:03:57.068 20:40:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:57.068 20:40:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.068 20:40:18 -- common/autotest_common.sh@10 -- # set +x 00:03:57.068 20:40:18 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.036 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.036 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.036 20:40:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:58.036 20:40:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:58.036 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:03:58.036 20:40:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:58.036 20:40:19 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:58.036 20:40:19 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:58.036 20:40:19 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:58.036 20:40:19 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:58.036 20:40:19 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:58.036 20:40:19 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:58.036 20:40:19 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:58.036 20:40:19 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.036 20:40:19 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:58.037 20:40:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:58.304 20:40:19 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:58.304 20:40:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:58.304 20:40:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:58.304 20:40:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:58.304 20:40:19 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:58.304 20:40:19 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:58.304 20:40:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:58.304 20:40:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:58.304 20:40:19 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:58.304 20:40:19 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:58.304 20:40:19 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:58.304 20:40:19 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:58.304 20:40:19 -- common/autotest_common.sh@1593 -- # return 0 00:03:58.304 20:40:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:58.304 20:40:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:58.304 20:40:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:58.304 20:40:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:58.304 20:40:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:58.304 20:40:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.304 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:03:58.304 20:40:19 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:03:58.304 20:40:19 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:03:58.304 20:40:19 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:03:58.304 20:40:19 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:58.304 20:40:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.304 20:40:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.304 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:03:58.304 ************************************ 00:03:58.304 START TEST env 00:03:58.304 ************************************ 00:03:58.304 20:40:19 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:58.304 * Looking for test storage... 
00:03:58.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:58.304 20:40:20 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:58.304 20:40:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.304 20:40:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.304 20:40:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.304 ************************************ 00:03:58.304 START TEST env_memory 00:03:58.304 ************************************ 00:03:58.304 20:40:20 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:58.304 00:03:58.304 00:03:58.304 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.304 http://cunit.sourceforge.net/ 00:03:58.304 00:03:58.304 00:03:58.304 Suite: memory 00:03:58.304 Test: alloc and free memory map ...[2024-07-15 20:40:20.166676] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:58.304 passed 00:03:58.304 Test: mem map translation ...[2024-07-15 20:40:20.187300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:58.304 [2024-07-15 20:40:20.187341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:58.304 [2024-07-15 20:40:20.187379] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:58.304 [2024-07-15 20:40:20.187388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:58.562 passed 00:03:58.562 Test: mem map registration ...[2024-07-15 20:40:20.225209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:58.562 [2024-07-15 20:40:20.225246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:58.562 passed 00:03:58.562 Test: mem map adjacent registrations ...passed 00:03:58.562 00:03:58.562 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.562 suites 1 1 n/a 0 0 00:03:58.562 tests 4 4 4 0 0 00:03:58.562 asserts 152 152 152 0 n/a 00:03:58.562 00:03:58.562 Elapsed time = 0.137 seconds 00:03:58.562 00:03:58.562 real 0m0.157s 00:03:58.562 user 0m0.141s 00:03:58.562 sys 0m0.013s 00:03:58.562 20:40:20 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.562 20:40:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:58.562 ************************************ 00:03:58.562 END TEST env_memory 00:03:58.562 ************************************ 00:03:58.562 20:40:20 env -- common/autotest_common.sh@1142 -- # return 0 00:03:58.562 20:40:20 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:58.562 20:40:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.562 20:40:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.562 20:40:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.562 ************************************ 00:03:58.562 START TEST env_vtophys 
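The START/END banners and the real/user/sys timing lines repeated throughout this section come from the run_test helper in autotest_common.sh; a simplified sketch of that pattern (run_test_sketch is a made-up name, and the real helper also manages xtrace and records per-test timing):

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }
  # e.g. run_test_sketch env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut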
00:03:58.562 ************************************ 00:03:58.562 20:40:20 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:58.562 EAL: lib.eal log level changed from notice to debug 00:03:58.562 EAL: Detected lcore 0 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 1 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 2 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 3 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 4 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 5 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 6 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 7 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 8 as core 0 on socket 0 00:03:58.562 EAL: Detected lcore 9 as core 0 on socket 0 00:03:58.562 EAL: Maximum logical cores by configuration: 128 00:03:58.562 EAL: Detected CPU lcores: 10 00:03:58.562 EAL: Detected NUMA nodes: 1 00:03:58.562 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:58.562 EAL: Detected shared linkage of DPDK 00:03:58.562 EAL: No shared files mode enabled, IPC will be disabled 00:03:58.562 EAL: Selected IOVA mode 'PA' 00:03:58.562 EAL: Probing VFIO support... 00:03:58.562 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:58.562 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:58.562 EAL: Ask a virtual area of 0x2e000 bytes 00:03:58.562 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:58.562 EAL: Setting up physically contiguous memory... 00:03:58.562 EAL: Setting maximum number of open files to 524288 00:03:58.562 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:58.562 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:58.563 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.563 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:58.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.563 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.563 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:58.563 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:58.563 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.563 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:58.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.563 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.563 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:58.563 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:58.563 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.563 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:58.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.563 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.563 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:58.563 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:58.563 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.563 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:58.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.563 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.563 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:58.563 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:58.563 EAL: Hugepages will be freed exactly as allocated. 
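The EAL lines above assume 2 MiB hugepages are already reserved; the setup.sh status earlier in the log showed 2048 of them on node0. A hedged sketch of how that reservation is usually made before a run (NRHUGE is the knob SPDK's setup.sh accepts; the sysctl path is the generic kernel interface):

  # via SPDK's helper
  sudo NRHUGE=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # or directly via the kernel, then verify
  echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
  grep -i hugepages /proc/meminfo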
00:03:58.563 EAL: No shared files mode enabled, IPC is disabled 00:03:58.563 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: TSC frequency is ~2490000 KHz 00:03:58.821 EAL: Main lcore 0 is ready (tid=7f6f24c0ea00;cpuset=[0]) 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 0 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 2MB 00:03:58.821 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:58.821 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:58.821 EAL: Mem event callback 'spdk:(nil)' registered 00:03:58.821 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:58.821 00:03:58.821 00:03:58.821 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.821 http://cunit.sourceforge.net/ 00:03:58.821 00:03:58.821 00:03:58.821 Suite: components_suite 00:03:58.821 Test: vtophys_malloc_test ...passed 00:03:58.821 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 4MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 4MB 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 6MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 6MB 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 10MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 10MB 00:03:58.821 EAL: Trying to obtain current memory policy. 
00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 18MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 18MB 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 34MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 34MB 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 66MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 66MB 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was expanded by 130MB 00:03:58.821 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.821 EAL: request: mp_malloc_sync 00:03:58.821 EAL: No shared files mode enabled, IPC is disabled 00:03:58.821 EAL: Heap on socket 0 was shrunk by 130MB 00:03:58.821 EAL: Trying to obtain current memory policy. 00:03:58.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.821 EAL: Restoring previous memory policy: 4 00:03:58.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.822 EAL: request: mp_malloc_sync 00:03:58.822 EAL: No shared files mode enabled, IPC is disabled 00:03:58.822 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.822 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.080 EAL: request: mp_malloc_sync 00:03:59.080 EAL: No shared files mode enabled, IPC is disabled 00:03:59.080 EAL: Heap on socket 0 was shrunk by 258MB 00:03:59.080 EAL: Trying to obtain current memory policy. 
00:03:59.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.080 EAL: Restoring previous memory policy: 4 00:03:59.080 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.080 EAL: request: mp_malloc_sync 00:03:59.080 EAL: No shared files mode enabled, IPC is disabled 00:03:59.080 EAL: Heap on socket 0 was expanded by 514MB 00:03:59.080 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.338 EAL: request: mp_malloc_sync 00:03:59.338 EAL: No shared files mode enabled, IPC is disabled 00:03:59.338 EAL: Heap on socket 0 was shrunk by 514MB 00:03:59.338 EAL: Trying to obtain current memory policy. 00:03:59.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.338 EAL: Restoring previous memory policy: 4 00:03:59.338 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.338 EAL: request: mp_malloc_sync 00:03:59.338 EAL: No shared files mode enabled, IPC is disabled 00:03:59.338 EAL: Heap on socket 0 was expanded by 1026MB 00:03:59.597 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.855 passed 00:03:59.855 00:03:59.855 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.855 suites 1 1 n/a 0 0 00:03:59.855 tests 2 2 2 0 0 00:03:59.855 asserts 5281 5281 5281 0 n/a 00:03:59.855 00:03:59.855 Elapsed time = 0.979 seconds 00:03:59.855 EAL: request: mp_malloc_sync 00:03:59.855 EAL: No shared files mode enabled, IPC is disabled 00:03:59.855 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.855 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.855 EAL: request: mp_malloc_sync 00:03:59.855 EAL: No shared files mode enabled, IPC is disabled 00:03:59.855 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.855 EAL: No shared files mode enabled, IPC is disabled 00:03:59.855 EAL: No shared files mode enabled, IPC is disabled 00:03:59.855 EAL: No shared files mode enabled, IPC is disabled 00:03:59.855 00:03:59.855 real 0m1.179s 00:03:59.855 user 0m0.638s 00:03:59.855 sys 0m0.416s 00:03:59.855 20:40:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.855 20:40:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:59.855 ************************************ 00:03:59.855 END TEST env_vtophys 00:03:59.855 ************************************ 00:03:59.855 20:40:21 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.855 20:40:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:59.855 20:40:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.855 20:40:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.855 20:40:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.855 ************************************ 00:03:59.855 START TEST env_pci 00:03:59.855 ************************************ 00:03:59.855 20:40:21 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:59.855 00:03:59.855 00:03:59.855 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.855 http://cunit.sourceforge.net/ 00:03:59.855 00:03:59.855 00:03:59.855 Suite: pci 00:03:59.855 Test: pci_hook ...[2024-07-15 20:40:21.610705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58670 has claimed it 00:03:59.856 passed 00:03:59.856 00:03:59.856 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.856 suites 1 1 n/a 0 0 00:03:59.856 tests 1 1 1 0 0 00:03:59.856 asserts 25 25 25 0 n/a 00:03:59.856 
00:03:59.856 Elapsed time = 0.003 seconds 00:03:59.856 EAL: Cannot find device (10000:00:01.0) 00:03:59.856 EAL: Failed to attach device on primary process 00:03:59.856 00:03:59.856 real 0m0.028s 00:03:59.856 user 0m0.016s 00:03:59.856 sys 0m0.012s 00:03:59.856 20:40:21 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.856 20:40:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:59.856 ************************************ 00:03:59.856 END TEST env_pci 00:03:59.856 ************************************ 00:03:59.856 20:40:21 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.856 20:40:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:59.856 20:40:21 env -- env/env.sh@15 -- # uname 00:03:59.856 20:40:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:59.856 20:40:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:59.856 20:40:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.856 20:40:21 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:59.856 20:40:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.856 20:40:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.856 ************************************ 00:03:59.856 START TEST env_dpdk_post_init 00:03:59.856 ************************************ 00:03:59.856 20:40:21 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.856 EAL: Detected CPU lcores: 10 00:03:59.856 EAL: Detected NUMA nodes: 1 00:03:59.856 EAL: Detected shared linkage of DPDK 00:03:59.856 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.856 EAL: Selected IOVA mode 'PA' 00:04:00.113 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.113 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:00.113 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:00.113 Starting DPDK initialization... 00:04:00.113 Starting SPDK post initialization... 00:04:00.113 SPDK NVMe probe 00:04:00.113 Attaching to 0000:00:10.0 00:04:00.113 Attaching to 0000:00:11.0 00:04:00.113 Attached to 0000:00:10.0 00:04:00.113 Attached to 0000:00:11.0 00:04:00.113 Cleaning up... 
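env_dpdk_post_init can attach to 0000:00:10.0 and 0000:00:11.0 only because setup.sh rebound them from the kernel nvme driver to uio_pci_generic earlier in the log; a quick sysfs check of the current binding (addresses taken from this run):

  for bdf in 0000:00:10.0 0000:00:11.0; do
      driver=$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")
      echo "$bdf is bound to $driver"
  done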
00:04:00.113 00:04:00.113 real 0m0.191s 00:04:00.113 user 0m0.055s 00:04:00.113 sys 0m0.035s 00:04:00.113 20:40:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.113 ************************************ 00:04:00.113 END TEST env_dpdk_post_init 00:04:00.113 ************************************ 00:04:00.113 20:40:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.113 20:40:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.113 20:40:21 env -- env/env.sh@26 -- # uname 00:04:00.113 20:40:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:00.113 20:40:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.113 20:40:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.113 20:40:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.113 20:40:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.113 ************************************ 00:04:00.113 START TEST env_mem_callbacks 00:04:00.113 ************************************ 00:04:00.113 20:40:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.113 EAL: Detected CPU lcores: 10 00:04:00.113 EAL: Detected NUMA nodes: 1 00:04:00.113 EAL: Detected shared linkage of DPDK 00:04:00.113 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.113 EAL: Selected IOVA mode 'PA' 00:04:00.370 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.370 00:04:00.370 00:04:00.370 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.370 http://cunit.sourceforge.net/ 00:04:00.370 00:04:00.370 00:04:00.370 Suite: memory 00:04:00.370 Test: test ... 
00:04:00.370 register 0x200000200000 2097152 00:04:00.370 malloc 3145728 00:04:00.370 register 0x200000400000 4194304 00:04:00.370 buf 0x200000500000 len 3145728 PASSED 00:04:00.370 malloc 64 00:04:00.370 buf 0x2000004fff40 len 64 PASSED 00:04:00.370 malloc 4194304 00:04:00.370 register 0x200000800000 6291456 00:04:00.370 buf 0x200000a00000 len 4194304 PASSED 00:04:00.370 free 0x200000500000 3145728 00:04:00.370 free 0x2000004fff40 64 00:04:00.370 unregister 0x200000400000 4194304 PASSED 00:04:00.370 free 0x200000a00000 4194304 00:04:00.370 unregister 0x200000800000 6291456 PASSED 00:04:00.370 malloc 8388608 00:04:00.370 register 0x200000400000 10485760 00:04:00.370 buf 0x200000600000 len 8388608 PASSED 00:04:00.370 free 0x200000600000 8388608 00:04:00.370 unregister 0x200000400000 10485760 PASSED 00:04:00.370 passed 00:04:00.370 00:04:00.370 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.370 suites 1 1 n/a 0 0 00:04:00.370 tests 1 1 1 0 0 00:04:00.370 asserts 15 15 15 0 n/a 00:04:00.370 00:04:00.370 Elapsed time = 0.009 seconds 00:04:00.370 00:04:00.370 real 0m0.144s 00:04:00.370 user 0m0.017s 00:04:00.370 sys 0m0.025s 00:04:00.370 20:40:22 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.370 20:40:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:00.370 ************************************ 00:04:00.370 END TEST env_mem_callbacks 00:04:00.370 ************************************ 00:04:00.370 20:40:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.370 ************************************ 00:04:00.370 END TEST env 00:04:00.370 ************************************ 00:04:00.370 00:04:00.370 real 0m2.153s 00:04:00.370 user 0m1.026s 00:04:00.370 sys 0m0.799s 00:04:00.370 20:40:22 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.370 20:40:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.370 20:40:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:00.370 20:40:22 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.370 20:40:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.370 20:40:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.370 20:40:22 -- common/autotest_common.sh@10 -- # set +x 00:04:00.370 ************************************ 00:04:00.370 START TEST rpc 00:04:00.370 ************************************ 00:04:00.370 20:40:22 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:00.627 * Looking for test storage... 00:04:00.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:00.627 20:40:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58781 00:04:00.627 20:40:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:00.627 20:40:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.627 20:40:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58781 00:04:00.627 20:40:22 rpc -- common/autotest_common.sh@829 -- # '[' -z 58781 ']' 00:04:00.627 20:40:22 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.627 20:40:22 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:00.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.627 20:40:22 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
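The rpc suite launched here drives a live spdk_tgt over its JSON-RPC UNIX socket. A condensed sketch of the start-up pattern rpc.sh uses, assuming the helpers from test/common/autotest_common.sh are sourced:

    # Start the target with only the 'bdev' tracepoint group enabled (-e bdev),
    # then block until /var/tmp/spdk.sock is accepting connections.
    build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    waitforlisten "$spdk_pid"          # polls the RPC socket; defined in autotest_common.sh
    scripts/rpc.py spdk_get_version    # any successful RPC confirms the target is ready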
00:04:00.627 20:40:22 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:00.627 20:40:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.627 [2024-07-15 20:40:22.396094] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:00.627 [2024-07-15 20:40:22.396180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58781 ] 00:04:00.884 [2024-07-15 20:40:22.538154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.884 [2024-07-15 20:40:22.629387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.884 [2024-07-15 20:40:22.629427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58781' to capture a snapshot of events at runtime. 00:04:00.884 [2024-07-15 20:40:22.629436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.884 [2024-07-15 20:40:22.629445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.884 [2024-07-15 20:40:22.629452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58781 for offline analysis/debug. 00:04:00.884 [2024-07-15 20:40:22.629477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.884 [2024-07-15 20:40:22.671631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:01.447 20:40:23 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.447 20:40:23 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:01.447 20:40:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.447 20:40:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.447 20:40:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:01.447 20:40:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:01.447 20:40:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.447 20:40:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.447 20:40:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.447 ************************************ 00:04:01.447 START TEST rpc_integrity 00:04:01.447 ************************************ 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.447 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.447 { 00:04:01.447 "name": "Malloc0", 00:04:01.447 "aliases": [ 00:04:01.447 "da54ef83-c463-461a-8303-29cd8b3bc734" 00:04:01.447 ], 00:04:01.447 "product_name": "Malloc disk", 00:04:01.447 "block_size": 512, 00:04:01.447 "num_blocks": 16384, 00:04:01.447 "uuid": "da54ef83-c463-461a-8303-29cd8b3bc734", 00:04:01.447 "assigned_rate_limits": { 00:04:01.447 "rw_ios_per_sec": 0, 00:04:01.447 "rw_mbytes_per_sec": 0, 00:04:01.447 "r_mbytes_per_sec": 0, 00:04:01.447 "w_mbytes_per_sec": 0 00:04:01.447 }, 00:04:01.447 "claimed": false, 00:04:01.447 "zoned": false, 00:04:01.447 "supported_io_types": { 00:04:01.447 "read": true, 00:04:01.447 "write": true, 00:04:01.447 "unmap": true, 00:04:01.447 "flush": true, 00:04:01.447 "reset": true, 00:04:01.447 "nvme_admin": false, 00:04:01.447 "nvme_io": false, 00:04:01.447 "nvme_io_md": false, 00:04:01.447 "write_zeroes": true, 00:04:01.447 "zcopy": true, 00:04:01.447 "get_zone_info": false, 00:04:01.447 "zone_management": false, 00:04:01.447 "zone_append": false, 00:04:01.447 "compare": false, 00:04:01.447 "compare_and_write": false, 00:04:01.447 "abort": true, 00:04:01.447 "seek_hole": false, 00:04:01.447 "seek_data": false, 00:04:01.447 "copy": true, 00:04:01.447 "nvme_iov_md": false 00:04:01.447 }, 00:04:01.447 "memory_domains": [ 00:04:01.447 { 00:04:01.447 "dma_device_id": "system", 00:04:01.447 "dma_device_type": 1 00:04:01.447 }, 00:04:01.447 { 00:04:01.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.447 "dma_device_type": 2 00:04:01.447 } 00:04:01.447 ], 00:04:01.447 "driver_specific": {} 00:04:01.447 } 00:04:01.447 ]' 00:04:01.447 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.704 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.704 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:01.704 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.704 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.704 [2024-07-15 20:40:23.374896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:01.704 [2024-07-15 20:40:23.374935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.704 [2024-07-15 20:40:23.374950] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5cda0 00:04:01.704 [2024-07-15 20:40:23.374958] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.704 [2024-07-15 20:40:23.376102] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.704 [2024-07-15 20:40:23.376122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:01.704 Passthru0 00:04:01.704 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.704 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.704 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.704 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.704 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.704 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.704 { 00:04:01.704 "name": "Malloc0", 00:04:01.704 "aliases": [ 00:04:01.704 "da54ef83-c463-461a-8303-29cd8b3bc734" 00:04:01.704 ], 00:04:01.704 "product_name": "Malloc disk", 00:04:01.704 "block_size": 512, 00:04:01.704 "num_blocks": 16384, 00:04:01.704 "uuid": "da54ef83-c463-461a-8303-29cd8b3bc734", 00:04:01.704 "assigned_rate_limits": { 00:04:01.704 "rw_ios_per_sec": 0, 00:04:01.704 "rw_mbytes_per_sec": 0, 00:04:01.704 "r_mbytes_per_sec": 0, 00:04:01.704 "w_mbytes_per_sec": 0 00:04:01.704 }, 00:04:01.704 "claimed": true, 00:04:01.704 "claim_type": "exclusive_write", 00:04:01.704 "zoned": false, 00:04:01.704 "supported_io_types": { 00:04:01.704 "read": true, 00:04:01.704 "write": true, 00:04:01.704 "unmap": true, 00:04:01.704 "flush": true, 00:04:01.704 "reset": true, 00:04:01.704 "nvme_admin": false, 00:04:01.704 "nvme_io": false, 00:04:01.704 "nvme_io_md": false, 00:04:01.704 "write_zeroes": true, 00:04:01.704 "zcopy": true, 00:04:01.704 "get_zone_info": false, 00:04:01.704 "zone_management": false, 00:04:01.704 "zone_append": false, 00:04:01.704 "compare": false, 00:04:01.704 "compare_and_write": false, 00:04:01.704 "abort": true, 00:04:01.704 "seek_hole": false, 00:04:01.704 "seek_data": false, 00:04:01.704 "copy": true, 00:04:01.704 "nvme_iov_md": false 00:04:01.704 }, 00:04:01.704 "memory_domains": [ 00:04:01.704 { 00:04:01.704 "dma_device_id": "system", 00:04:01.704 "dma_device_type": 1 00:04:01.704 }, 00:04:01.704 { 00:04:01.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.704 "dma_device_type": 2 00:04:01.704 } 00:04:01.704 ], 00:04:01.704 "driver_specific": {} 00:04:01.704 }, 00:04:01.704 { 00:04:01.704 "name": "Passthru0", 00:04:01.704 "aliases": [ 00:04:01.704 "41438351-0431-5de6-8451-bbde93413b1c" 00:04:01.704 ], 00:04:01.704 "product_name": "passthru", 00:04:01.704 "block_size": 512, 00:04:01.704 "num_blocks": 16384, 00:04:01.704 "uuid": "41438351-0431-5de6-8451-bbde93413b1c", 00:04:01.704 "assigned_rate_limits": { 00:04:01.704 "rw_ios_per_sec": 0, 00:04:01.704 "rw_mbytes_per_sec": 0, 00:04:01.704 "r_mbytes_per_sec": 0, 00:04:01.704 "w_mbytes_per_sec": 0 00:04:01.704 }, 00:04:01.704 "claimed": false, 00:04:01.704 "zoned": false, 00:04:01.704 "supported_io_types": { 00:04:01.704 "read": true, 00:04:01.704 "write": true, 00:04:01.704 "unmap": true, 00:04:01.704 "flush": true, 00:04:01.705 "reset": true, 00:04:01.705 "nvme_admin": false, 00:04:01.705 "nvme_io": false, 00:04:01.705 "nvme_io_md": false, 00:04:01.705 "write_zeroes": true, 00:04:01.705 "zcopy": true, 00:04:01.705 "get_zone_info": false, 00:04:01.705 "zone_management": false, 00:04:01.705 "zone_append": false, 00:04:01.705 "compare": false, 00:04:01.705 "compare_and_write": false, 00:04:01.705 "abort": true, 00:04:01.705 "seek_hole": false, 00:04:01.705 "seek_data": false, 00:04:01.705 "copy": true, 00:04:01.705 "nvme_iov_md": false 00:04:01.705 }, 00:04:01.705 "memory_domains": [ 00:04:01.705 { 00:04:01.705 "dma_device_id": "system", 00:04:01.705 
"dma_device_type": 1 00:04:01.705 }, 00:04:01.705 { 00:04:01.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.705 "dma_device_type": 2 00:04:01.705 } 00:04:01.705 ], 00:04:01.705 "driver_specific": { 00:04:01.705 "passthru": { 00:04:01.705 "name": "Passthru0", 00:04:01.705 "base_bdev_name": "Malloc0" 00:04:01.705 } 00:04:01.705 } 00:04:01.705 } 00:04:01.705 ]' 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.705 ************************************ 00:04:01.705 END TEST rpc_integrity 00:04:01.705 ************************************ 00:04:01.705 20:40:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.705 00:04:01.705 real 0m0.299s 00:04:01.705 user 0m0.167s 00:04:01.705 sys 0m0.061s 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.705 20:40:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.705 20:40:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.705 20:40:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:01.705 20:40:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.705 20:40:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.705 20:40:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.705 ************************************ 00:04:01.705 START TEST rpc_plugins 00:04:01.705 ************************************ 00:04:01.705 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:01.705 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:01.705 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.705 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 
20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:01.961 { 00:04:01.961 "name": "Malloc1", 00:04:01.961 "aliases": [ 00:04:01.961 "f80d5bfe-5f11-428c-816e-75879ac80271" 00:04:01.961 ], 00:04:01.961 "product_name": "Malloc disk", 00:04:01.961 "block_size": 4096, 00:04:01.961 "num_blocks": 256, 00:04:01.961 "uuid": "f80d5bfe-5f11-428c-816e-75879ac80271", 00:04:01.961 "assigned_rate_limits": { 00:04:01.961 "rw_ios_per_sec": 0, 00:04:01.961 "rw_mbytes_per_sec": 0, 00:04:01.961 "r_mbytes_per_sec": 0, 00:04:01.961 "w_mbytes_per_sec": 0 00:04:01.961 }, 00:04:01.961 "claimed": false, 00:04:01.961 "zoned": false, 00:04:01.961 "supported_io_types": { 00:04:01.961 "read": true, 00:04:01.961 "write": true, 00:04:01.961 "unmap": true, 00:04:01.961 "flush": true, 00:04:01.961 "reset": true, 00:04:01.961 "nvme_admin": false, 00:04:01.961 "nvme_io": false, 00:04:01.961 "nvme_io_md": false, 00:04:01.961 "write_zeroes": true, 00:04:01.961 "zcopy": true, 00:04:01.961 "get_zone_info": false, 00:04:01.961 "zone_management": false, 00:04:01.961 "zone_append": false, 00:04:01.961 "compare": false, 00:04:01.961 "compare_and_write": false, 00:04:01.961 "abort": true, 00:04:01.961 "seek_hole": false, 00:04:01.961 "seek_data": false, 00:04:01.961 "copy": true, 00:04:01.961 "nvme_iov_md": false 00:04:01.961 }, 00:04:01.961 "memory_domains": [ 00:04:01.961 { 00:04:01.961 "dma_device_id": "system", 00:04:01.961 "dma_device_type": 1 00:04:01.961 }, 00:04:01.961 { 00:04:01.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.961 "dma_device_type": 2 00:04:01.961 } 00:04:01.961 ], 00:04:01.961 "driver_specific": {} 00:04:01.961 } 00:04:01.961 ]' 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:01.961 20:40:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:01.961 00:04:01.961 real 0m0.161s 00:04:01.961 user 0m0.090s 00:04:01.961 sys 0m0.030s 00:04:01.961 ************************************ 00:04:01.961 END TEST rpc_plugins 00:04:01.961 ************************************ 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.961 20:40:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 20:40:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.961 20:40:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:01.961 20:40:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.961 20:40:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
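The rpc_plugins test works through rpc.py's plugin loader: the module named by --plugin is imported from PYTHONPATH, and the RPC wrappers it registers become ordinary subcommands. A sketch mirroring the PYTHONPATH export made earlier in this run (plugin module and paths as shown above):

    export PYTHONPATH=$PYTHONPATH:test/rpc_plugins:test/rpc
    scripts/rpc.py --plugin rpc_plugin create_malloc            # plugin-provided method -> Malloc1 (4 KiB blocks)
    scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1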
00:04:01.961 20:40:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 ************************************ 00:04:01.961 START TEST rpc_trace_cmd_test 00:04:01.961 ************************************ 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:01.961 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58781", 00:04:01.961 "tpoint_group_mask": "0x8", 00:04:01.961 "iscsi_conn": { 00:04:01.961 "mask": "0x2", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "scsi": { 00:04:01.961 "mask": "0x4", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "bdev": { 00:04:01.961 "mask": "0x8", 00:04:01.961 "tpoint_mask": "0xffffffffffffffff" 00:04:01.961 }, 00:04:01.961 "nvmf_rdma": { 00:04:01.961 "mask": "0x10", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "nvmf_tcp": { 00:04:01.961 "mask": "0x20", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "ftl": { 00:04:01.961 "mask": "0x40", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "blobfs": { 00:04:01.961 "mask": "0x80", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "dsa": { 00:04:01.961 "mask": "0x200", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "thread": { 00:04:01.961 "mask": "0x400", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "nvme_pcie": { 00:04:01.961 "mask": "0x800", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "iaa": { 00:04:01.961 "mask": "0x1000", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "nvme_tcp": { 00:04:01.961 "mask": "0x2000", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "bdev_nvme": { 00:04:01.961 "mask": "0x4000", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 }, 00:04:01.961 "sock": { 00:04:01.961 "mask": "0x8000", 00:04:01.961 "tpoint_mask": "0x0" 00:04:01.961 } 00:04:01.961 }' 00:04:01.961 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:02.219 20:40:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:02.219 20:40:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:02.219 00:04:02.219 real 0m0.197s 00:04:02.219 user 0m0.154s 00:04:02.219 sys 0m0.034s 00:04:02.219 20:40:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.219 
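The checks above only pass because spdk_tgt was started with -e bdev: trace_get_info then reports tpoint_group_mask 0x8 (the bdev group) with every bdev tracepoint enabled, plus the shared-memory file a tracer can read. A sketch of inspecting and capturing those tracepoints by hand against the same target:

    scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask    # "0x8" -> bdev group enabled
    scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask     # 0xffffffffffffffff -> all bdev tracepoints on
    # Live capture, as suggested by the app_setup_trace notice printed at start-up:
    spdk_trace -s spdk_tgt -p 58781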
************************************ 00:04:02.219 END TEST rpc_trace_cmd_test 00:04:02.219 ************************************ 00:04:02.219 20:40:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.219 20:40:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:02.219 20:40:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:02.219 20:40:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:02.219 20:40:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:02.219 20:40:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.219 20:40:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.219 20:40:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.219 ************************************ 00:04:02.219 START TEST rpc_daemon_integrity 00:04:02.219 ************************************ 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.219 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.477 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.477 { 00:04:02.477 "name": "Malloc2", 00:04:02.477 "aliases": [ 00:04:02.477 "acb886f6-9ca1-4f8a-90d3-2421322c3065" 00:04:02.478 ], 00:04:02.478 "product_name": "Malloc disk", 00:04:02.478 "block_size": 512, 00:04:02.478 "num_blocks": 16384, 00:04:02.478 "uuid": "acb886f6-9ca1-4f8a-90d3-2421322c3065", 00:04:02.478 "assigned_rate_limits": { 00:04:02.478 "rw_ios_per_sec": 0, 00:04:02.478 "rw_mbytes_per_sec": 0, 00:04:02.478 "r_mbytes_per_sec": 0, 00:04:02.478 "w_mbytes_per_sec": 0 00:04:02.478 }, 00:04:02.478 "claimed": false, 00:04:02.478 "zoned": false, 00:04:02.478 "supported_io_types": { 00:04:02.478 "read": true, 00:04:02.478 "write": true, 00:04:02.478 "unmap": true, 00:04:02.478 "flush": true, 00:04:02.478 "reset": true, 00:04:02.478 "nvme_admin": false, 00:04:02.478 "nvme_io": false, 00:04:02.478 "nvme_io_md": false, 00:04:02.478 "write_zeroes": true, 00:04:02.478 "zcopy": true, 00:04:02.478 "get_zone_info": false, 00:04:02.478 "zone_management": false, 00:04:02.478 "zone_append": 
false, 00:04:02.478 "compare": false, 00:04:02.478 "compare_and_write": false, 00:04:02.478 "abort": true, 00:04:02.478 "seek_hole": false, 00:04:02.478 "seek_data": false, 00:04:02.478 "copy": true, 00:04:02.478 "nvme_iov_md": false 00:04:02.478 }, 00:04:02.478 "memory_domains": [ 00:04:02.478 { 00:04:02.478 "dma_device_id": "system", 00:04:02.478 "dma_device_type": 1 00:04:02.478 }, 00:04:02.478 { 00:04:02.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.478 "dma_device_type": 2 00:04:02.478 } 00:04:02.478 ], 00:04:02.478 "driver_specific": {} 00:04:02.478 } 00:04:02.478 ]' 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.478 [2024-07-15 20:40:24.241842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:02.478 [2024-07-15 20:40:24.241889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.478 [2024-07-15 20:40:24.241905] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdc1be0 00:04:02.478 [2024-07-15 20:40:24.241913] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.478 [2024-07-15 20:40:24.243113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.478 [2024-07-15 20:40:24.243148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.478 Passthru0 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.478 { 00:04:02.478 "name": "Malloc2", 00:04:02.478 "aliases": [ 00:04:02.478 "acb886f6-9ca1-4f8a-90d3-2421322c3065" 00:04:02.478 ], 00:04:02.478 "product_name": "Malloc disk", 00:04:02.478 "block_size": 512, 00:04:02.478 "num_blocks": 16384, 00:04:02.478 "uuid": "acb886f6-9ca1-4f8a-90d3-2421322c3065", 00:04:02.478 "assigned_rate_limits": { 00:04:02.478 "rw_ios_per_sec": 0, 00:04:02.478 "rw_mbytes_per_sec": 0, 00:04:02.478 "r_mbytes_per_sec": 0, 00:04:02.478 "w_mbytes_per_sec": 0 00:04:02.478 }, 00:04:02.478 "claimed": true, 00:04:02.478 "claim_type": "exclusive_write", 00:04:02.478 "zoned": false, 00:04:02.478 "supported_io_types": { 00:04:02.478 "read": true, 00:04:02.478 "write": true, 00:04:02.478 "unmap": true, 00:04:02.478 "flush": true, 00:04:02.478 "reset": true, 00:04:02.478 "nvme_admin": false, 00:04:02.478 "nvme_io": false, 00:04:02.478 "nvme_io_md": false, 00:04:02.478 "write_zeroes": true, 00:04:02.478 "zcopy": true, 00:04:02.478 "get_zone_info": false, 00:04:02.478 "zone_management": false, 00:04:02.478 "zone_append": false, 00:04:02.478 "compare": false, 00:04:02.478 "compare_and_write": false, 00:04:02.478 "abort": true, 00:04:02.478 
"seek_hole": false, 00:04:02.478 "seek_data": false, 00:04:02.478 "copy": true, 00:04:02.478 "nvme_iov_md": false 00:04:02.478 }, 00:04:02.478 "memory_domains": [ 00:04:02.478 { 00:04:02.478 "dma_device_id": "system", 00:04:02.478 "dma_device_type": 1 00:04:02.478 }, 00:04:02.478 { 00:04:02.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.478 "dma_device_type": 2 00:04:02.478 } 00:04:02.478 ], 00:04:02.478 "driver_specific": {} 00:04:02.478 }, 00:04:02.478 { 00:04:02.478 "name": "Passthru0", 00:04:02.478 "aliases": [ 00:04:02.478 "1e31385a-bf1e-594f-9170-c4901cd9362d" 00:04:02.478 ], 00:04:02.478 "product_name": "passthru", 00:04:02.478 "block_size": 512, 00:04:02.478 "num_blocks": 16384, 00:04:02.478 "uuid": "1e31385a-bf1e-594f-9170-c4901cd9362d", 00:04:02.478 "assigned_rate_limits": { 00:04:02.478 "rw_ios_per_sec": 0, 00:04:02.478 "rw_mbytes_per_sec": 0, 00:04:02.478 "r_mbytes_per_sec": 0, 00:04:02.478 "w_mbytes_per_sec": 0 00:04:02.478 }, 00:04:02.478 "claimed": false, 00:04:02.478 "zoned": false, 00:04:02.478 "supported_io_types": { 00:04:02.478 "read": true, 00:04:02.478 "write": true, 00:04:02.478 "unmap": true, 00:04:02.478 "flush": true, 00:04:02.478 "reset": true, 00:04:02.478 "nvme_admin": false, 00:04:02.478 "nvme_io": false, 00:04:02.478 "nvme_io_md": false, 00:04:02.478 "write_zeroes": true, 00:04:02.478 "zcopy": true, 00:04:02.478 "get_zone_info": false, 00:04:02.478 "zone_management": false, 00:04:02.478 "zone_append": false, 00:04:02.478 "compare": false, 00:04:02.478 "compare_and_write": false, 00:04:02.478 "abort": true, 00:04:02.478 "seek_hole": false, 00:04:02.478 "seek_data": false, 00:04:02.478 "copy": true, 00:04:02.478 "nvme_iov_md": false 00:04:02.478 }, 00:04:02.478 "memory_domains": [ 00:04:02.478 { 00:04:02.478 "dma_device_id": "system", 00:04:02.478 "dma_device_type": 1 00:04:02.478 }, 00:04:02.478 { 00:04:02.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.478 "dma_device_type": 2 00:04:02.478 } 00:04:02.478 ], 00:04:02.478 "driver_specific": { 00:04:02.478 "passthru": { 00:04:02.478 "name": "Passthru0", 00:04:02.478 "base_bdev_name": "Malloc2" 00:04:02.478 } 00:04:02.478 } 00:04:02.478 } 00:04:02.478 ]' 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.478 20:40:24 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.478 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.737 ************************************ 00:04:02.737 END TEST rpc_daemon_integrity 00:04:02.737 ************************************ 00:04:02.737 20:40:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.737 00:04:02.737 real 0m0.330s 00:04:02.737 user 0m0.194s 00:04:02.737 sys 0m0.067s 00:04:02.737 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.737 20:40:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:02.737 20:40:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:02.737 20:40:24 rpc -- rpc/rpc.sh@84 -- # killprocess 58781 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@948 -- # '[' -z 58781 ']' 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@952 -- # kill -0 58781 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@953 -- # uname 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58781 00:04:02.737 killing process with pid 58781 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58781' 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@967 -- # kill 58781 00:04:02.737 20:40:24 rpc -- common/autotest_common.sh@972 -- # wait 58781 00:04:02.996 00:04:02.996 real 0m2.610s 00:04:02.996 user 0m3.179s 00:04:02.996 sys 0m0.771s 00:04:02.996 20:40:24 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.996 20:40:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.996 ************************************ 00:04:02.996 END TEST rpc 00:04:02.996 ************************************ 00:04:02.996 20:40:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.996 20:40:24 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:02.996 20:40:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.996 20:40:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.996 20:40:24 -- common/autotest_common.sh@10 -- # set +x 00:04:02.996 ************************************ 00:04:02.996 START TEST skip_rpc 00:04:02.996 ************************************ 00:04:02.996 20:40:24 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:03.254 * Looking for test storage... 
00:04:03.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:03.255 20:40:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:03.255 20:40:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:03.255 20:40:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:03.255 20:40:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.255 20:40:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.255 20:40:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.255 ************************************ 00:04:03.255 START TEST skip_rpc 00:04:03.255 ************************************ 00:04:03.255 20:40:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:03.255 20:40:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58979 00:04:03.255 20:40:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.255 20:40:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:03.255 20:40:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:03.255 [2024-07-15 20:40:25.085233] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:03.255 [2024-07-15 20:40:25.085298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:04:03.513 [2024-07-15 20:40:25.226707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.513 [2024-07-15 20:40:25.311944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.513 [2024-07-15 20:40:25.352747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58979 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58979 ']' 00:04:08.783 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58979 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58979 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:08.784 killing process with pid 58979 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58979' 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58979 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58979 00:04:08.784 00:04:08.784 real 0m5.362s 00:04:08.784 user 0m5.024s 00:04:08.784 sys 0m0.250s 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.784 20:40:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.784 ************************************ 00:04:08.784 END TEST skip_rpc 00:04:08.784 ************************************ 00:04:08.784 20:40:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:08.784 20:40:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:08.784 20:40:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.784 20:40:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.784 20:40:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.784 ************************************ 00:04:08.784 START TEST skip_rpc_with_json 00:04:08.784 ************************************ 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59060 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59060 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59060 ']' 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:08.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
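The skip_rpc_with_json test starting here round-trips the target's configuration: it creates a TCP transport over RPC, saves the live configuration with save_config (the JSON dumped below), then kills the target and boots a fresh one purely from that file, finally grepping its log for 'TCP Transport Init'. A condensed sketch, with CONFIG_PATH and LOG_PATH as skip_rpc.sh defines them in this run:

    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    # Relaunch without an RPC server, driven entirely by the saved JSON; the harness
    # redirects the target's output to LOG_PATH and greps it afterwards.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt &
    sleep 5; kill $!; grep -q 'TCP Transport Init' test/rpc/log.txt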
00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:08.784 20:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.784 [2024-07-15 20:40:30.498839] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:08.784 [2024-07-15 20:40:30.498904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:04:08.784 [2024-07-15 20:40:30.640376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.043 [2024-07-15 20:40:30.726461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.043 [2024-07-15 20:40:30.767616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.612 [2024-07-15 20:40:31.419132] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:09.612 request: 00:04:09.612 { 00:04:09.612 "trtype": "tcp", 00:04:09.612 "method": "nvmf_get_transports", 00:04:09.612 "req_id": 1 00:04:09.612 } 00:04:09.612 Got JSON-RPC error response 00:04:09.612 response: 00:04:09.612 { 00:04:09.612 "code": -19, 00:04:09.612 "message": "No such device" 00:04:09.612 } 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.612 [2024-07-15 20:40:31.431204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.612 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.871 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.871 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:09.871 { 00:04:09.871 "subsystems": [ 00:04:09.871 { 00:04:09.871 "subsystem": "keyring", 00:04:09.871 "config": [] 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "subsystem": "iobuf", 00:04:09.871 "config": [ 00:04:09.871 { 00:04:09.871 "method": "iobuf_set_options", 00:04:09.871 "params": { 00:04:09.871 "small_pool_count": 8192, 00:04:09.871 "large_pool_count": 1024, 00:04:09.871 "small_bufsize": 8192, 00:04:09.871 "large_bufsize": 135168 00:04:09.871 } 00:04:09.871 } 00:04:09.871 
] 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "subsystem": "sock", 00:04:09.871 "config": [ 00:04:09.871 { 00:04:09.871 "method": "sock_set_default_impl", 00:04:09.871 "params": { 00:04:09.871 "impl_name": "uring" 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "sock_impl_set_options", 00:04:09.871 "params": { 00:04:09.871 "impl_name": "ssl", 00:04:09.871 "recv_buf_size": 4096, 00:04:09.871 "send_buf_size": 4096, 00:04:09.871 "enable_recv_pipe": true, 00:04:09.871 "enable_quickack": false, 00:04:09.871 "enable_placement_id": 0, 00:04:09.871 "enable_zerocopy_send_server": true, 00:04:09.871 "enable_zerocopy_send_client": false, 00:04:09.871 "zerocopy_threshold": 0, 00:04:09.871 "tls_version": 0, 00:04:09.871 "enable_ktls": false 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "sock_impl_set_options", 00:04:09.871 "params": { 00:04:09.871 "impl_name": "posix", 00:04:09.871 "recv_buf_size": 2097152, 00:04:09.871 "send_buf_size": 2097152, 00:04:09.871 "enable_recv_pipe": true, 00:04:09.871 "enable_quickack": false, 00:04:09.871 "enable_placement_id": 0, 00:04:09.871 "enable_zerocopy_send_server": true, 00:04:09.871 "enable_zerocopy_send_client": false, 00:04:09.871 "zerocopy_threshold": 0, 00:04:09.871 "tls_version": 0, 00:04:09.871 "enable_ktls": false 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "sock_impl_set_options", 00:04:09.871 "params": { 00:04:09.871 "impl_name": "uring", 00:04:09.871 "recv_buf_size": 2097152, 00:04:09.871 "send_buf_size": 2097152, 00:04:09.871 "enable_recv_pipe": true, 00:04:09.871 "enable_quickack": false, 00:04:09.871 "enable_placement_id": 0, 00:04:09.871 "enable_zerocopy_send_server": false, 00:04:09.871 "enable_zerocopy_send_client": false, 00:04:09.871 "zerocopy_threshold": 0, 00:04:09.871 "tls_version": 0, 00:04:09.871 "enable_ktls": false 00:04:09.871 } 00:04:09.871 } 00:04:09.871 ] 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "subsystem": "vmd", 00:04:09.871 "config": [] 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "subsystem": "accel", 00:04:09.871 "config": [ 00:04:09.871 { 00:04:09.871 "method": "accel_set_options", 00:04:09.871 "params": { 00:04:09.871 "small_cache_size": 128, 00:04:09.871 "large_cache_size": 16, 00:04:09.871 "task_count": 2048, 00:04:09.871 "sequence_count": 2048, 00:04:09.871 "buf_count": 2048 00:04:09.871 } 00:04:09.871 } 00:04:09.871 ] 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "subsystem": "bdev", 00:04:09.871 "config": [ 00:04:09.871 { 00:04:09.871 "method": "bdev_set_options", 00:04:09.871 "params": { 00:04:09.871 "bdev_io_pool_size": 65535, 00:04:09.871 "bdev_io_cache_size": 256, 00:04:09.871 "bdev_auto_examine": true, 00:04:09.871 "iobuf_small_cache_size": 128, 00:04:09.871 "iobuf_large_cache_size": 16 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "bdev_raid_set_options", 00:04:09.871 "params": { 00:04:09.871 "process_window_size_kb": 1024 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "bdev_iscsi_set_options", 00:04:09.871 "params": { 00:04:09.871 "timeout_sec": 30 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "bdev_nvme_set_options", 00:04:09.871 "params": { 00:04:09.871 "action_on_timeout": "none", 00:04:09.871 "timeout_us": 0, 00:04:09.871 "timeout_admin_us": 0, 00:04:09.871 "keep_alive_timeout_ms": 10000, 00:04:09.871 "arbitration_burst": 0, 00:04:09.871 "low_priority_weight": 0, 00:04:09.871 "medium_priority_weight": 0, 00:04:09.871 "high_priority_weight": 0, 00:04:09.871 
"nvme_adminq_poll_period_us": 10000, 00:04:09.871 "nvme_ioq_poll_period_us": 0, 00:04:09.871 "io_queue_requests": 0, 00:04:09.871 "delay_cmd_submit": true, 00:04:09.871 "transport_retry_count": 4, 00:04:09.871 "bdev_retry_count": 3, 00:04:09.871 "transport_ack_timeout": 0, 00:04:09.871 "ctrlr_loss_timeout_sec": 0, 00:04:09.871 "reconnect_delay_sec": 0, 00:04:09.871 "fast_io_fail_timeout_sec": 0, 00:04:09.871 "disable_auto_failback": false, 00:04:09.871 "generate_uuids": false, 00:04:09.871 "transport_tos": 0, 00:04:09.871 "nvme_error_stat": false, 00:04:09.871 "rdma_srq_size": 0, 00:04:09.871 "io_path_stat": false, 00:04:09.871 "allow_accel_sequence": false, 00:04:09.871 "rdma_max_cq_size": 0, 00:04:09.871 "rdma_cm_event_timeout_ms": 0, 00:04:09.871 "dhchap_digests": [ 00:04:09.871 "sha256", 00:04:09.871 "sha384", 00:04:09.871 "sha512" 00:04:09.871 ], 00:04:09.871 "dhchap_dhgroups": [ 00:04:09.871 "null", 00:04:09.871 "ffdhe2048", 00:04:09.871 "ffdhe3072", 00:04:09.871 "ffdhe4096", 00:04:09.871 "ffdhe6144", 00:04:09.871 "ffdhe8192" 00:04:09.871 ] 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "bdev_nvme_set_hotplug", 00:04:09.871 "params": { 00:04:09.871 "period_us": 100000, 00:04:09.871 "enable": false 00:04:09.871 } 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "method": "bdev_wait_for_examine" 00:04:09.871 } 00:04:09.871 ] 00:04:09.871 }, 00:04:09.871 { 00:04:09.871 "subsystem": "scsi", 00:04:09.871 "config": null 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": "scheduler", 00:04:09.872 "config": [ 00:04:09.872 { 00:04:09.872 "method": "framework_set_scheduler", 00:04:09.872 "params": { 00:04:09.872 "name": "static" 00:04:09.872 } 00:04:09.872 } 00:04:09.872 ] 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": "vhost_scsi", 00:04:09.872 "config": [] 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": "vhost_blk", 00:04:09.872 "config": [] 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": "ublk", 00:04:09.872 "config": [] 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": "nbd", 00:04:09.872 "config": [] 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": "nvmf", 00:04:09.872 "config": [ 00:04:09.872 { 00:04:09.872 "method": "nvmf_set_config", 00:04:09.872 "params": { 00:04:09.872 "discovery_filter": "match_any", 00:04:09.872 "admin_cmd_passthru": { 00:04:09.872 "identify_ctrlr": false 00:04:09.872 } 00:04:09.872 } 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "method": "nvmf_set_max_subsystems", 00:04:09.872 "params": { 00:04:09.872 "max_subsystems": 1024 00:04:09.872 } 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "method": "nvmf_set_crdt", 00:04:09.872 "params": { 00:04:09.872 "crdt1": 0, 00:04:09.872 "crdt2": 0, 00:04:09.872 "crdt3": 0 00:04:09.872 } 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "method": "nvmf_create_transport", 00:04:09.872 "params": { 00:04:09.872 "trtype": "TCP", 00:04:09.872 "max_queue_depth": 128, 00:04:09.872 "max_io_qpairs_per_ctrlr": 127, 00:04:09.872 "in_capsule_data_size": 4096, 00:04:09.872 "max_io_size": 131072, 00:04:09.872 "io_unit_size": 131072, 00:04:09.872 "max_aq_depth": 128, 00:04:09.872 "num_shared_buffers": 511, 00:04:09.872 "buf_cache_size": 4294967295, 00:04:09.872 "dif_insert_or_strip": false, 00:04:09.872 "zcopy": false, 00:04:09.872 "c2h_success": true, 00:04:09.872 "sock_priority": 0, 00:04:09.872 "abort_timeout_sec": 1, 00:04:09.872 "ack_timeout": 0, 00:04:09.872 "data_wr_pool_size": 0 00:04:09.872 } 00:04:09.872 } 00:04:09.872 ] 00:04:09.872 }, 00:04:09.872 { 00:04:09.872 "subsystem": 
"iscsi", 00:04:09.872 "config": [ 00:04:09.872 { 00:04:09.872 "method": "iscsi_set_options", 00:04:09.872 "params": { 00:04:09.872 "node_base": "iqn.2016-06.io.spdk", 00:04:09.872 "max_sessions": 128, 00:04:09.872 "max_connections_per_session": 2, 00:04:09.872 "max_queue_depth": 64, 00:04:09.872 "default_time2wait": 2, 00:04:09.872 "default_time2retain": 20, 00:04:09.872 "first_burst_length": 8192, 00:04:09.872 "immediate_data": true, 00:04:09.872 "allow_duplicated_isid": false, 00:04:09.872 "error_recovery_level": 0, 00:04:09.872 "nop_timeout": 60, 00:04:09.872 "nop_in_interval": 30, 00:04:09.872 "disable_chap": false, 00:04:09.872 "require_chap": false, 00:04:09.872 "mutual_chap": false, 00:04:09.872 "chap_group": 0, 00:04:09.872 "max_large_datain_per_connection": 64, 00:04:09.872 "max_r2t_per_connection": 4, 00:04:09.872 "pdu_pool_size": 36864, 00:04:09.872 "immediate_data_pool_size": 16384, 00:04:09.872 "data_out_pool_size": 2048 00:04:09.872 } 00:04:09.872 } 00:04:09.872 ] 00:04:09.872 } 00:04:09.872 ] 00:04:09.872 } 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59060 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59060 ']' 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59060 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59060 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:09.872 killing process with pid 59060 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59060' 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59060 00:04:09.872 20:40:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59060 00:04:10.131 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59088 00:04:10.131 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:10.131 20:40:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:15.395 20:40:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59088 00:04:15.395 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59088 ']' 00:04:15.395 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59088 00:04:15.395 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:15.396 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.396 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59088 00:04:15.396 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.396 killing process with pid 59088 00:04:15.396 20:40:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.396 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59088' 00:04:15.396 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59088 00:04:15.396 20:40:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59088 00:04:15.655 20:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:15.655 20:40:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:15.655 00:04:15.655 real 0m6.878s 00:04:15.655 user 0m6.643s 00:04:15.655 sys 0m0.567s 00:04:15.655 20:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.655 ************************************ 00:04:15.655 END TEST skip_rpc_with_json 00:04:15.655 ************************************ 00:04:15.655 20:40:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.656 20:40:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.656 ************************************ 00:04:15.656 START TEST skip_rpc_with_delay 00:04:15.656 ************************************ 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:15.656 [2024-07-15 
20:40:37.445248] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:15.656 [2024-07-15 20:40:37.445348] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:15.656 00:04:15.656 real 0m0.072s 00:04:15.656 user 0m0.040s 00:04:15.656 sys 0m0.031s 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.656 ************************************ 00:04:15.656 END TEST skip_rpc_with_delay 00:04:15.656 ************************************ 00:04:15.656 20:40:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.656 20:40:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:15.656 20:40:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:15.656 20:40:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.656 20:40:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.656 ************************************ 00:04:15.656 START TEST exit_on_failed_rpc_init 00:04:15.656 ************************************ 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59197 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59197 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59197 ']' 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.656 20:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.914 [2024-07-15 20:40:37.577702] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:04:15.914 [2024-07-15 20:40:37.577780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59197 ] 00:04:15.914 [2024-07-15 20:40:37.719202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.914 [2024-07-15 20:40:37.818900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.173 [2024-07-15 20:40:37.860796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:16.739 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.739 [2024-07-15 20:40:38.570727] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:16.739 [2024-07-15 20:40:38.570801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ] 00:04:16.996 [2024-07-15 20:40:38.712578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.996 [2024-07-15 20:40:38.811783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.996 [2024-07-15 20:40:38.811865] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:16.996 [2024-07-15 20:40:38.811876] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:16.996 [2024-07-15 20:40:38.811884] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59197 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59197 ']' 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59197 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:16.996 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:17.254 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59197 00:04:17.254 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:17.254 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:17.254 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59197' 00:04:17.254 killing process with pid 59197 00:04:17.254 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59197 00:04:17.254 20:40:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59197 00:04:17.514 00:04:17.514 real 0m1.719s 00:04:17.514 user 0m2.026s 00:04:17.514 sys 0m0.365s 00:04:17.514 ************************************ 00:04:17.514 END TEST exit_on_failed_rpc_init 00:04:17.514 ************************************ 00:04:17.514 20:40:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.514 20:40:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 20:40:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.514 20:40:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.514 00:04:17.514 real 0m14.413s 00:04:17.514 user 0m13.869s 00:04:17.514 sys 0m1.460s 00:04:17.514 20:40:39 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.514 20:40:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 ************************************ 00:04:17.514 END TEST skip_rpc 00:04:17.514 ************************************ 00:04:17.514 20:40:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.514 20:40:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:17.514 20:40:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.514 
20:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.514 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:04:17.514 ************************************ 00:04:17.514 START TEST rpc_client 00:04:17.514 ************************************ 00:04:17.514 20:40:39 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:17.777 * Looking for test storage... 00:04:17.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:17.777 20:40:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:17.777 OK 00:04:17.777 20:40:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.777 00:04:17.777 real 0m0.157s 00:04:17.777 user 0m0.065s 00:04:17.777 sys 0m0.103s 00:04:17.777 20:40:39 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.777 20:40:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.777 ************************************ 00:04:17.777 END TEST rpc_client 00:04:17.777 ************************************ 00:04:17.777 20:40:39 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.777 20:40:39 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:17.777 20:40:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.777 20:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.777 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:04:17.777 ************************************ 00:04:17.777 START TEST json_config 00:04:17.777 ************************************ 00:04:17.777 20:40:39 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:17.777 20:40:39 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:17.777 20:40:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.034 20:40:39 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:18.034 20:40:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.034 20:40:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.034 20:40:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.034 20:40:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.034 20:40:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.034 20:40:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.034 20:40:39 json_config -- paths/export.sh@5 -- # export PATH 00:04:18.034 20:40:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@47 -- # : 0 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:18.034 20:40:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:18.034 20:40:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.035 INFO: JSON configuration test init 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.035 20:40:39 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:18.035 20:40:39 json_config -- json_config/common.sh@9 -- # local app=target 00:04:18.035 20:40:39 json_config -- json_config/common.sh@10 -- # shift 00:04:18.035 20:40:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.035 20:40:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.035 20:40:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.035 20:40:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.035 20:40:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.035 20:40:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59333 00:04:18.035 Waiting for target to run... 00:04:18.035 20:40:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.035 20:40:39 json_config -- json_config/common.sh@25 -- # waitforlisten 59333 /var/tmp/spdk_tgt.sock 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@829 -- # '[' -z 59333 ']' 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.035 20:40:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.035 20:40:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.035 [2024-07-15 20:40:39.781757] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:18.035 [2024-07-15 20:40:39.782328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ] 00:04:18.292 [2024-07-15 20:40:40.143468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.549 [2024-07-15 20:40:40.219327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.806 20:40:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.806 20:40:40 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:18.806 00:04:18.806 20:40:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.806 20:40:40 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:18.806 20:40:40 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:18.806 20:40:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.806 20:40:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.806 20:40:40 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:18.806 20:40:40 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:18.806 20:40:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.806 20:40:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.063 20:40:40 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:19.063 20:40:40 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:19.063 20:40:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:19.063 [2024-07-15 20:40:40.969536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:19.321 20:40:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.321 20:40:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:19.321 20:40:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock notify_get_types 00:04:19.321 20:40:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:19.579 20:40:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.579 20:40:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:19.579 20:40:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.579 20:40:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:19.579 20:40:41 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.579 20:40:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:19.837 MallocForNvmf0 00:04:19.837 20:40:41 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:19.837 20:40:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.096 MallocForNvmf1 00:04:20.096 20:40:41 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.096 20:40:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.355 [2024-07-15 20:40:42.073023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.355 20:40:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.355 20:40:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:20.614 20:40:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:20.614 20:40:42 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:20.873 20:40:42 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:20.873 20:40:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:20.873 20:40:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:20.873 20:40:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:21.132 [2024-07-15 20:40:42.904026] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.132 20:40:42 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:21.132 20:40:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.132 20:40:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 20:40:42 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:21.132 20:40:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.132 20:40:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 20:40:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:21.132 20:40:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:21.132 20:40:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:21.391 MallocBdevForConfigChangeCheck 00:04:21.391 20:40:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:21.391 20:40:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.391 20:40:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.391 20:40:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:21.391 20:40:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.957 INFO: shutting down applications... 00:04:21.957 20:40:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
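At this point the trace has built a complete NVMe-oF TCP target over the RPC socket and snapshotted its configuration. Condensed out of the surrounding xtrace noise — a sketch of what the test drives, not an additional step in this run, and assuming spdk_tgt is already listening on /var/tmp/spdk_tgt.sock — the same setup looks roughly like this:

    # create two malloc bdevs to act as namespaces (size in MB and block size as traced above)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # bring up the TCP transport and one subsystem, then attach the namespaces and a listener
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # snapshot the resulting configuration for the later comparison
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json

The shutdown and relaunch that this snapshot feeds into continue in the trace below.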
00:04:21.957 20:40:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:21.957 20:40:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:21.957 20:40:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:21.957 20:40:43 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:22.215 Calling clear_iscsi_subsystem 00:04:22.215 Calling clear_nvmf_subsystem 00:04:22.215 Calling clear_nbd_subsystem 00:04:22.216 Calling clear_ublk_subsystem 00:04:22.216 Calling clear_vhost_blk_subsystem 00:04:22.216 Calling clear_vhost_scsi_subsystem 00:04:22.216 Calling clear_bdev_subsystem 00:04:22.216 20:40:43 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:22.216 20:40:43 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:22.216 20:40:43 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:22.216 20:40:43 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.216 20:40:43 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:22.216 20:40:43 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:22.474 20:40:44 json_config -- json_config/json_config.sh@345 -- # break 00:04:22.475 20:40:44 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:22.475 20:40:44 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:22.475 20:40:44 json_config -- json_config/common.sh@31 -- # local app=target 00:04:22.475 20:40:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:22.475 20:40:44 json_config -- json_config/common.sh@35 -- # [[ -n 59333 ]] 00:04:22.475 20:40:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59333 00:04:22.475 20:40:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:22.475 20:40:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.475 20:40:44 json_config -- json_config/common.sh@41 -- # kill -0 59333 00:04:22.475 20:40:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.043 20:40:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.043 20:40:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.043 20:40:44 json_config -- json_config/common.sh@41 -- # kill -0 59333 00:04:23.043 20:40:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.043 20:40:44 json_config -- json_config/common.sh@43 -- # break 00:04:23.043 SPDK target shutdown done 00:04:23.043 20:40:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.043 20:40:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.043 INFO: relaunching applications... 00:04:23.043 20:40:44 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
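The shutdown just traced — kill -SIGINT on pid 59333, then polling kill -0 in half-second steps for at most 30 tries — is the generic app-shutdown helper these json_config tests share. As a standalone sketch of the same shape (the pid value is the one from this run; the helper itself lives in test/json_config/common.sh and differs in detail):

    pid=59333                                  # pid reported when the target started
    kill -SIGINT "$pid"                        # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # stop waiting once the process is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

The relaunch with the saved spdk_tgt_config.json follows directly in the trace.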
00:04:23.043 20:40:44 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.043 20:40:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:23.043 20:40:44 json_config -- json_config/common.sh@10 -- # shift 00:04:23.043 20:40:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.043 20:40:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.043 20:40:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.043 20:40:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.043 20:40:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.043 20:40:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59518 00:04:23.043 Waiting for target to run... 00:04:23.043 20:40:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.043 20:40:44 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.043 20:40:44 json_config -- json_config/common.sh@25 -- # waitforlisten 59518 /var/tmp/spdk_tgt.sock 00:04:23.043 20:40:44 json_config -- common/autotest_common.sh@829 -- # '[' -z 59518 ']' 00:04:23.043 20:40:44 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.043 20:40:44 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.043 20:40:44 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.043 20:40:44 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.043 20:40:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.043 [2024-07-15 20:40:44.792836] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:23.043 [2024-07-15 20:40:44.792902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:04:23.302 [2024-07-15 20:40:45.147801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.561 [2024-07-15 20:40:45.223163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.561 [2024-07-15 20:40:45.347928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.820 [2024-07-15 20:40:45.548323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.820 [2024-07-15 20:40:45.580330] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.820 20:40:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.820 00:04:23.820 20:40:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:23.820 20:40:45 json_config -- json_config/common.sh@26 -- # echo '' 00:04:23.820 20:40:45 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:23.820 INFO: Checking if target configuration is the same... 
00:04:23.820 20:40:45 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:23.820 20:40:45 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:23.820 20:40:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.820 20:40:45 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.820 + '[' 2 -ne 2 ']' 00:04:23.820 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:23.820 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:23.820 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:23.820 +++ basename /dev/fd/62 00:04:23.820 ++ mktemp /tmp/62.XXX 00:04:23.820 + tmp_file_1=/tmp/62.ON3 00:04:23.820 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.820 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:23.820 + tmp_file_2=/tmp/spdk_tgt_config.json.z2p 00:04:23.820 + ret=0 00:04:23.820 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:24.078 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:24.337 + diff -u /tmp/62.ON3 /tmp/spdk_tgt_config.json.z2p 00:04:24.337 INFO: JSON config files are the same 00:04:24.337 + echo 'INFO: JSON config files are the same' 00:04:24.337 + rm /tmp/62.ON3 /tmp/spdk_tgt_config.json.z2p 00:04:24.337 + exit 0 00:04:24.337 20:40:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:24.337 INFO: changing configuration and checking if this can be detected... 00:04:24.337 20:40:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:24.337 20:40:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:24.337 20:40:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:24.337 20:40:46 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.337 20:40:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:24.337 20:40:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.337 + '[' 2 -ne 2 ']' 00:04:24.337 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:24.337 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
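The json_diff.sh helper being set up here compares the relaunched target's live configuration against the file it was started from: both JSON documents are passed through config_filter.py -method sort to normalize ordering, then handed to diff -u, so an empty diff means the relaunch reproduced the saved state. A condensed sketch of that flow, assuming (as the trace suggests) that config_filter.py reads the config on stdin; the traced comparison itself continues below:

    # normalize both configs so key/array ordering cannot cause spurious differences
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    # identical output means the target configuration is unchanged
    diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'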
00:04:24.337 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:24.337 +++ basename /dev/fd/62 00:04:24.337 ++ mktemp /tmp/62.XXX 00:04:24.337 + tmp_file_1=/tmp/62.pyU 00:04:24.337 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:24.337 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:24.596 + tmp_file_2=/tmp/spdk_tgt_config.json.l91 00:04:24.596 + ret=0 00:04:24.596 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:24.855 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:24.855 + diff -u /tmp/62.pyU /tmp/spdk_tgt_config.json.l91 00:04:24.855 + ret=1 00:04:24.855 + echo '=== Start of file: /tmp/62.pyU ===' 00:04:24.855 + cat /tmp/62.pyU 00:04:24.855 + echo '=== End of file: /tmp/62.pyU ===' 00:04:24.855 + echo '' 00:04:24.855 + echo '=== Start of file: /tmp/spdk_tgt_config.json.l91 ===' 00:04:24.855 + cat /tmp/spdk_tgt_config.json.l91 00:04:24.855 + echo '=== End of file: /tmp/spdk_tgt_config.json.l91 ===' 00:04:24.855 + echo '' 00:04:24.855 + rm /tmp/62.pyU /tmp/spdk_tgt_config.json.l91 00:04:24.855 + exit 1 00:04:24.855 INFO: configuration change detected. 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@317 -- # [[ -n 59518 ]] 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 20:40:46 json_config -- json_config/json_config.sh@323 -- # killprocess 59518 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@948 -- # '[' -z 59518 ']' 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@952 -- # kill -0 59518 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@953 -- # uname 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59518 00:04:24.855 
20:40:46 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.855 killing process with pid 59518 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59518' 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@967 -- # kill 59518 00:04:24.855 20:40:46 json_config -- common/autotest_common.sh@972 -- # wait 59518 00:04:25.115 20:40:46 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.115 20:40:46 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:25.115 20:40:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.115 20:40:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.115 20:40:47 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:25.115 INFO: Success 00:04:25.115 20:40:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:25.115 00:04:25.115 real 0m7.426s 00:04:25.115 user 0m10.117s 00:04:25.115 sys 0m1.761s 00:04:25.115 20:40:47 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.115 ************************************ 00:04:25.115 END TEST json_config 00:04:25.115 ************************************ 00:04:25.115 20:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.412 20:40:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.412 20:40:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:25.412 20:40:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.412 20:40:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.412 20:40:47 -- common/autotest_common.sh@10 -- # set +x 00:04:25.412 ************************************ 00:04:25.412 START TEST json_config_extra_key 00:04:25.412 ************************************ 00:04:25.412 20:40:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:25.412 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.412 20:40:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.412 20:40:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.412 20:40:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.412 20:40:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.412 20:40:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.412 20:40:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.412 20:40:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:25.412 20:40:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.412 20:40:47 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.412 20:40:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.412 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:25.412 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:25.412 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:25.412 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:25.413 INFO: launching applications... 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:25.413 20:40:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59653 00:04:25.413 Waiting for target to run... 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59653 /var/tmp/spdk_tgt.sock 00:04:25.413 20:40:47 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59653 ']' 00:04:25.413 20:40:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:25.413 20:40:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.413 20:40:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:25.413 20:40:47 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.413 20:40:47 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.413 20:40:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:25.413 [2024-07-15 20:40:47.272806] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:25.413 [2024-07-15 20:40:47.272973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59653 ] 00:04:25.980 [2024-07-15 20:40:47.633980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.981 [2024-07-15 20:40:47.709582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.981 [2024-07-15 20:40:47.729574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:26.549 20:40:48 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.549 20:40:48 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:26.549 00:04:26.549 INFO: shutting down applications... 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:26.549 20:40:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
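The shutdown announced here is handled by json_config_test_shutdown_app, which, as the next records show, sends SIGINT to the target and then polls kill -0 for at most 30 half-second intervals before reporting "SPDK target shutdown done". A stand-alone sketch of that bounded-poll shutdown, with tgt_pid assumed to hold the pid started above (the force-kill fallback at the end is illustrative, not the harness's exact behavior):

    # Graceful shutdown: SIGINT first, then poll for exit with bounded retries,
    # mirroring the loop from json_config/common.sh seen in the next records.
    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # target has exited
        sleep 0.5
    done
    if kill -0 "$tgt_pid" 2>/dev/null; then
        echo "target still running after 15s, force-killing" >&2
        kill -9 "$tgt_pid"
    fi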
00:04:26.549 20:40:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59653 ]] 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59653 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59653 00:04:26.549 20:40:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59653 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.808 20:40:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:26.808 SPDK target shutdown done 00:04:26.808 Success 00:04:26.808 20:40:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:26.808 00:04:26.808 real 0m1.576s 00:04:26.808 user 0m1.334s 00:04:26.808 sys 0m0.402s 00:04:26.808 20:40:48 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.808 20:40:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:26.808 ************************************ 00:04:26.808 END TEST json_config_extra_key 00:04:26.808 ************************************ 00:04:26.808 20:40:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.808 20:40:48 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:26.808 20:40:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.808 20:40:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.808 20:40:48 -- common/autotest_common.sh@10 -- # set +x 00:04:26.808 ************************************ 00:04:26.808 START TEST alias_rpc 00:04:26.808 ************************************ 00:04:26.808 20:40:48 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:27.067 * Looking for test storage... 
00:04:27.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:27.067 20:40:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:27.067 20:40:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59722 00:04:27.067 20:40:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.067 20:40:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59722 00:04:27.067 20:40:48 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59722 ']' 00:04:27.067 20:40:48 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.067 20:40:48 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.067 20:40:48 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.067 20:40:48 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.067 20:40:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.067 [2024-07-15 20:40:48.892543] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:27.067 [2024-07-15 20:40:48.892649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59722 ] 00:04:27.325 [2024-07-15 20:40:49.042271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.325 [2024-07-15 20:40:49.138502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.325 [2024-07-15 20:40:49.180502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:27.891 20:40:49 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.891 20:40:49 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:27.891 20:40:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:28.150 20:40:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59722 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59722 ']' 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59722 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59722 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.150 killing process with pid 59722 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59722' 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@967 -- # kill 59722 00:04:28.150 20:40:49 alias_rpc -- common/autotest_common.sh@972 -- # wait 59722 00:04:28.409 00:04:28.409 real 0m1.588s 00:04:28.409 user 0m1.669s 00:04:28.409 sys 0m0.424s 00:04:28.409 20:40:50 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.409 20:40:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.409 
************************************ 00:04:28.409 END TEST alias_rpc 00:04:28.409 ************************************ 00:04:28.668 20:40:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.668 20:40:50 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:28.668 20:40:50 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:28.668 20:40:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.668 20:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.668 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:04:28.668 ************************************ 00:04:28.668 START TEST spdkcli_tcp 00:04:28.668 ************************************ 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:28.668 * Looking for test storage... 00:04:28.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59794 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:28.668 20:40:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59794 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59794 ']' 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.668 20:40:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.668 [2024-07-15 20:40:50.550075] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
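The spdkcli_tcp run starting here exercises the JSON-RPC server over TCP rather than over the UNIX socket: the records that follow show socat bridging 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py being pointed at that address with a retry count and timeout. A minimal sketch of the same bridge-and-query pattern, assuming a target is already listening on the default socket:

    # Bridge TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket, then
    # issue one RPC over TCP, as spdkcli_tcp does below. A single-shot socat is
    # enough for one rpc.py session; it exits when the connection closes.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r retries, -t per-request timeout (s), -s/-p TCP address and port.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true   # may already have exited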
00:04:28.668 [2024-07-15 20:40:50.550158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59794 ] 00:04:28.927 [2024-07-15 20:40:50.695329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.927 [2024-07-15 20:40:50.827951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.927 [2024-07-15 20:40:50.827966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.186 [2024-07-15 20:40:50.879113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:29.753 20:40:51 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.753 20:40:51 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:29.753 20:40:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59811 00:04:29.753 20:40:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:29.753 20:40:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:30.013 [ 00:04:30.013 "bdev_malloc_delete", 00:04:30.013 "bdev_malloc_create", 00:04:30.013 "bdev_null_resize", 00:04:30.013 "bdev_null_delete", 00:04:30.013 "bdev_null_create", 00:04:30.013 "bdev_nvme_cuse_unregister", 00:04:30.013 "bdev_nvme_cuse_register", 00:04:30.013 "bdev_opal_new_user", 00:04:30.013 "bdev_opal_set_lock_state", 00:04:30.013 "bdev_opal_delete", 00:04:30.013 "bdev_opal_get_info", 00:04:30.013 "bdev_opal_create", 00:04:30.013 "bdev_nvme_opal_revert", 00:04:30.013 "bdev_nvme_opal_init", 00:04:30.013 "bdev_nvme_send_cmd", 00:04:30.013 "bdev_nvme_get_path_iostat", 00:04:30.013 "bdev_nvme_get_mdns_discovery_info", 00:04:30.013 "bdev_nvme_stop_mdns_discovery", 00:04:30.013 "bdev_nvme_start_mdns_discovery", 00:04:30.013 "bdev_nvme_set_multipath_policy", 00:04:30.013 "bdev_nvme_set_preferred_path", 00:04:30.013 "bdev_nvme_get_io_paths", 00:04:30.013 "bdev_nvme_remove_error_injection", 00:04:30.013 "bdev_nvme_add_error_injection", 00:04:30.013 "bdev_nvme_get_discovery_info", 00:04:30.013 "bdev_nvme_stop_discovery", 00:04:30.013 "bdev_nvme_start_discovery", 00:04:30.013 "bdev_nvme_get_controller_health_info", 00:04:30.013 "bdev_nvme_disable_controller", 00:04:30.013 "bdev_nvme_enable_controller", 00:04:30.013 "bdev_nvme_reset_controller", 00:04:30.013 "bdev_nvme_get_transport_statistics", 00:04:30.013 "bdev_nvme_apply_firmware", 00:04:30.013 "bdev_nvme_detach_controller", 00:04:30.013 "bdev_nvme_get_controllers", 00:04:30.013 "bdev_nvme_attach_controller", 00:04:30.013 "bdev_nvme_set_hotplug", 00:04:30.013 "bdev_nvme_set_options", 00:04:30.013 "bdev_passthru_delete", 00:04:30.013 "bdev_passthru_create", 00:04:30.013 "bdev_lvol_set_parent_bdev", 00:04:30.013 "bdev_lvol_set_parent", 00:04:30.013 "bdev_lvol_check_shallow_copy", 00:04:30.013 "bdev_lvol_start_shallow_copy", 00:04:30.013 "bdev_lvol_grow_lvstore", 00:04:30.013 "bdev_lvol_get_lvols", 00:04:30.013 "bdev_lvol_get_lvstores", 00:04:30.013 "bdev_lvol_delete", 00:04:30.013 "bdev_lvol_set_read_only", 00:04:30.013 "bdev_lvol_resize", 00:04:30.013 "bdev_lvol_decouple_parent", 00:04:30.013 "bdev_lvol_inflate", 00:04:30.013 "bdev_lvol_rename", 00:04:30.013 "bdev_lvol_clone_bdev", 00:04:30.013 "bdev_lvol_clone", 00:04:30.013 "bdev_lvol_snapshot", 00:04:30.013 "bdev_lvol_create", 
00:04:30.013 "bdev_lvol_delete_lvstore", 00:04:30.013 "bdev_lvol_rename_lvstore", 00:04:30.013 "bdev_lvol_create_lvstore", 00:04:30.013 "bdev_raid_set_options", 00:04:30.013 "bdev_raid_remove_base_bdev", 00:04:30.013 "bdev_raid_add_base_bdev", 00:04:30.013 "bdev_raid_delete", 00:04:30.013 "bdev_raid_create", 00:04:30.013 "bdev_raid_get_bdevs", 00:04:30.013 "bdev_error_inject_error", 00:04:30.013 "bdev_error_delete", 00:04:30.013 "bdev_error_create", 00:04:30.013 "bdev_split_delete", 00:04:30.013 "bdev_split_create", 00:04:30.013 "bdev_delay_delete", 00:04:30.013 "bdev_delay_create", 00:04:30.013 "bdev_delay_update_latency", 00:04:30.013 "bdev_zone_block_delete", 00:04:30.013 "bdev_zone_block_create", 00:04:30.013 "blobfs_create", 00:04:30.013 "blobfs_detect", 00:04:30.013 "blobfs_set_cache_size", 00:04:30.013 "bdev_aio_delete", 00:04:30.013 "bdev_aio_rescan", 00:04:30.013 "bdev_aio_create", 00:04:30.013 "bdev_ftl_set_property", 00:04:30.013 "bdev_ftl_get_properties", 00:04:30.013 "bdev_ftl_get_stats", 00:04:30.013 "bdev_ftl_unmap", 00:04:30.013 "bdev_ftl_unload", 00:04:30.013 "bdev_ftl_delete", 00:04:30.013 "bdev_ftl_load", 00:04:30.013 "bdev_ftl_create", 00:04:30.013 "bdev_virtio_attach_controller", 00:04:30.013 "bdev_virtio_scsi_get_devices", 00:04:30.013 "bdev_virtio_detach_controller", 00:04:30.013 "bdev_virtio_blk_set_hotplug", 00:04:30.013 "bdev_iscsi_delete", 00:04:30.013 "bdev_iscsi_create", 00:04:30.013 "bdev_iscsi_set_options", 00:04:30.013 "bdev_uring_delete", 00:04:30.013 "bdev_uring_rescan", 00:04:30.013 "bdev_uring_create", 00:04:30.013 "accel_error_inject_error", 00:04:30.013 "ioat_scan_accel_module", 00:04:30.013 "dsa_scan_accel_module", 00:04:30.013 "iaa_scan_accel_module", 00:04:30.013 "keyring_file_remove_key", 00:04:30.013 "keyring_file_add_key", 00:04:30.013 "keyring_linux_set_options", 00:04:30.013 "iscsi_get_histogram", 00:04:30.013 "iscsi_enable_histogram", 00:04:30.013 "iscsi_set_options", 00:04:30.013 "iscsi_get_auth_groups", 00:04:30.013 "iscsi_auth_group_remove_secret", 00:04:30.013 "iscsi_auth_group_add_secret", 00:04:30.013 "iscsi_delete_auth_group", 00:04:30.013 "iscsi_create_auth_group", 00:04:30.013 "iscsi_set_discovery_auth", 00:04:30.013 "iscsi_get_options", 00:04:30.013 "iscsi_target_node_request_logout", 00:04:30.013 "iscsi_target_node_set_redirect", 00:04:30.013 "iscsi_target_node_set_auth", 00:04:30.013 "iscsi_target_node_add_lun", 00:04:30.013 "iscsi_get_stats", 00:04:30.013 "iscsi_get_connections", 00:04:30.013 "iscsi_portal_group_set_auth", 00:04:30.013 "iscsi_start_portal_group", 00:04:30.013 "iscsi_delete_portal_group", 00:04:30.013 "iscsi_create_portal_group", 00:04:30.013 "iscsi_get_portal_groups", 00:04:30.013 "iscsi_delete_target_node", 00:04:30.013 "iscsi_target_node_remove_pg_ig_maps", 00:04:30.013 "iscsi_target_node_add_pg_ig_maps", 00:04:30.013 "iscsi_create_target_node", 00:04:30.013 "iscsi_get_target_nodes", 00:04:30.013 "iscsi_delete_initiator_group", 00:04:30.013 "iscsi_initiator_group_remove_initiators", 00:04:30.013 "iscsi_initiator_group_add_initiators", 00:04:30.013 "iscsi_create_initiator_group", 00:04:30.013 "iscsi_get_initiator_groups", 00:04:30.013 "nvmf_set_crdt", 00:04:30.013 "nvmf_set_config", 00:04:30.013 "nvmf_set_max_subsystems", 00:04:30.013 "nvmf_stop_mdns_prr", 00:04:30.013 "nvmf_publish_mdns_prr", 00:04:30.013 "nvmf_subsystem_get_listeners", 00:04:30.013 "nvmf_subsystem_get_qpairs", 00:04:30.013 "nvmf_subsystem_get_controllers", 00:04:30.013 "nvmf_get_stats", 00:04:30.013 "nvmf_get_transports", 00:04:30.013 
"nvmf_create_transport", 00:04:30.014 "nvmf_get_targets", 00:04:30.014 "nvmf_delete_target", 00:04:30.014 "nvmf_create_target", 00:04:30.014 "nvmf_subsystem_allow_any_host", 00:04:30.014 "nvmf_subsystem_remove_host", 00:04:30.014 "nvmf_subsystem_add_host", 00:04:30.014 "nvmf_ns_remove_host", 00:04:30.014 "nvmf_ns_add_host", 00:04:30.014 "nvmf_subsystem_remove_ns", 00:04:30.014 "nvmf_subsystem_add_ns", 00:04:30.014 "nvmf_subsystem_listener_set_ana_state", 00:04:30.014 "nvmf_discovery_get_referrals", 00:04:30.014 "nvmf_discovery_remove_referral", 00:04:30.014 "nvmf_discovery_add_referral", 00:04:30.014 "nvmf_subsystem_remove_listener", 00:04:30.014 "nvmf_subsystem_add_listener", 00:04:30.014 "nvmf_delete_subsystem", 00:04:30.014 "nvmf_create_subsystem", 00:04:30.014 "nvmf_get_subsystems", 00:04:30.014 "env_dpdk_get_mem_stats", 00:04:30.014 "nbd_get_disks", 00:04:30.014 "nbd_stop_disk", 00:04:30.014 "nbd_start_disk", 00:04:30.014 "ublk_recover_disk", 00:04:30.014 "ublk_get_disks", 00:04:30.014 "ublk_stop_disk", 00:04:30.014 "ublk_start_disk", 00:04:30.014 "ublk_destroy_target", 00:04:30.014 "ublk_create_target", 00:04:30.014 "virtio_blk_create_transport", 00:04:30.014 "virtio_blk_get_transports", 00:04:30.014 "vhost_controller_set_coalescing", 00:04:30.014 "vhost_get_controllers", 00:04:30.014 "vhost_delete_controller", 00:04:30.014 "vhost_create_blk_controller", 00:04:30.014 "vhost_scsi_controller_remove_target", 00:04:30.014 "vhost_scsi_controller_add_target", 00:04:30.014 "vhost_start_scsi_controller", 00:04:30.014 "vhost_create_scsi_controller", 00:04:30.014 "thread_set_cpumask", 00:04:30.014 "framework_get_governor", 00:04:30.014 "framework_get_scheduler", 00:04:30.014 "framework_set_scheduler", 00:04:30.014 "framework_get_reactors", 00:04:30.014 "thread_get_io_channels", 00:04:30.014 "thread_get_pollers", 00:04:30.014 "thread_get_stats", 00:04:30.014 "framework_monitor_context_switch", 00:04:30.014 "spdk_kill_instance", 00:04:30.014 "log_enable_timestamps", 00:04:30.014 "log_get_flags", 00:04:30.014 "log_clear_flag", 00:04:30.014 "log_set_flag", 00:04:30.014 "log_get_level", 00:04:30.014 "log_set_level", 00:04:30.014 "log_get_print_level", 00:04:30.014 "log_set_print_level", 00:04:30.014 "framework_enable_cpumask_locks", 00:04:30.014 "framework_disable_cpumask_locks", 00:04:30.014 "framework_wait_init", 00:04:30.014 "framework_start_init", 00:04:30.014 "scsi_get_devices", 00:04:30.014 "bdev_get_histogram", 00:04:30.014 "bdev_enable_histogram", 00:04:30.014 "bdev_set_qos_limit", 00:04:30.014 "bdev_set_qd_sampling_period", 00:04:30.014 "bdev_get_bdevs", 00:04:30.014 "bdev_reset_iostat", 00:04:30.014 "bdev_get_iostat", 00:04:30.014 "bdev_examine", 00:04:30.014 "bdev_wait_for_examine", 00:04:30.014 "bdev_set_options", 00:04:30.014 "notify_get_notifications", 00:04:30.014 "notify_get_types", 00:04:30.014 "accel_get_stats", 00:04:30.014 "accel_set_options", 00:04:30.014 "accel_set_driver", 00:04:30.014 "accel_crypto_key_destroy", 00:04:30.014 "accel_crypto_keys_get", 00:04:30.014 "accel_crypto_key_create", 00:04:30.014 "accel_assign_opc", 00:04:30.014 "accel_get_module_info", 00:04:30.014 "accel_get_opc_assignments", 00:04:30.014 "vmd_rescan", 00:04:30.014 "vmd_remove_device", 00:04:30.014 "vmd_enable", 00:04:30.014 "sock_get_default_impl", 00:04:30.014 "sock_set_default_impl", 00:04:30.014 "sock_impl_set_options", 00:04:30.014 "sock_impl_get_options", 00:04:30.014 "iobuf_get_stats", 00:04:30.014 "iobuf_set_options", 00:04:30.014 "framework_get_pci_devices", 00:04:30.014 
"framework_get_config", 00:04:30.014 "framework_get_subsystems", 00:04:30.014 "trace_get_info", 00:04:30.014 "trace_get_tpoint_group_mask", 00:04:30.014 "trace_disable_tpoint_group", 00:04:30.014 "trace_enable_tpoint_group", 00:04:30.014 "trace_clear_tpoint_mask", 00:04:30.014 "trace_set_tpoint_mask", 00:04:30.014 "keyring_get_keys", 00:04:30.014 "spdk_get_version", 00:04:30.014 "rpc_get_methods" 00:04:30.014 ] 00:04:30.014 20:40:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.014 20:40:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:30.014 20:40:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59794 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59794 ']' 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59794 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59794 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.014 killing process with pid 59794 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59794' 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59794 00:04:30.014 20:40:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59794 00:04:30.273 00:04:30.273 real 0m1.780s 00:04:30.273 user 0m3.242s 00:04:30.273 sys 0m0.481s 00:04:30.273 20:40:52 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.273 20:40:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.273 ************************************ 00:04:30.273 END TEST spdkcli_tcp 00:04:30.273 ************************************ 00:04:30.532 20:40:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.532 20:40:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.532 20:40:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.532 20:40:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.532 20:40:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.532 ************************************ 00:04:30.532 START TEST dpdk_mem_utility 00:04:30.532 ************************************ 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:30.532 * Looking for test storage... 
00:04:30.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:30.532 20:40:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:30.532 20:40:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59879 00:04:30.532 20:40:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59879 00:04:30.532 20:40:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59879 ']' 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.532 20:40:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.532 [2024-07-15 20:40:52.360635] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:30.532 [2024-07-15 20:40:52.360718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59879 ] 00:04:30.792 [2024-07-15 20:40:52.496326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.792 [2024-07-15 20:40:52.608472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.792 [2024-07-15 20:40:52.651071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:31.360 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.360 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:31.360 20:40:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:31.360 20:40:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:31.360 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.360 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.360 { 00:04:31.360 "filename": "/tmp/spdk_mem_dump.txt" 00:04:31.360 } 00:04:31.360 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.360 20:40:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:31.360 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:31.360 1 heaps totaling size 814.000000 MiB 00:04:31.360 size: 814.000000 MiB heap id: 0 00:04:31.360 end heaps---------- 00:04:31.360 8 mempools totaling size 598.116089 MiB 00:04:31.360 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:31.360 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:31.360 size: 84.521057 MiB name: bdev_io_59879 00:04:31.360 size: 51.011292 MiB name: evtpool_59879 00:04:31.360 size: 50.003479 
MiB name: msgpool_59879 00:04:31.360 size: 21.763794 MiB name: PDU_Pool 00:04:31.360 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:31.360 size: 0.026123 MiB name: Session_Pool 00:04:31.360 end mempools------- 00:04:31.360 6 memzones totaling size 4.142822 MiB 00:04:31.360 size: 1.000366 MiB name: RG_ring_0_59879 00:04:31.361 size: 1.000366 MiB name: RG_ring_1_59879 00:04:31.361 size: 1.000366 MiB name: RG_ring_4_59879 00:04:31.361 size: 1.000366 MiB name: RG_ring_5_59879 00:04:31.361 size: 0.125366 MiB name: RG_ring_2_59879 00:04:31.361 size: 0.015991 MiB name: RG_ring_3_59879 00:04:31.361 end memzones------- 00:04:31.361 20:40:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:31.620 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:04:31.620 list of free elements. size: 12.472107 MiB 00:04:31.620 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:31.620 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:31.620 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:31.620 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:31.620 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:31.620 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:31.620 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:31.620 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:31.620 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:31.620 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:04:31.620 element at address: 0x20000b200000 with size: 0.489624 MiB 00:04:31.620 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:31.620 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:31.620 element at address: 0x200027e00000 with size: 0.395935 MiB 00:04:31.620 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:31.620 list of standard malloc elements. 
size: 199.265320 MiB 00:04:31.620 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:31.620 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:31.620 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:31.620 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:31.620 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:31.620 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:31.620 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:31.620 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:31.620 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:31.620 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:31.620 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:31.620 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:31.620 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92080 
with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:04:31.621 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:31.621 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:31.621 element at 
address: 0x200027e6d800 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:31.621 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6fcc0 
with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:31.622 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:31.622 list of memzone associated elements. size: 602.262573 MiB 00:04:31.622 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:31.622 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:31.622 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:31.622 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:31.622 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:31.622 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59879_0 00:04:31.622 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:31.622 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59879_0 00:04:31.622 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:31.622 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59879_0 00:04:31.622 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:31.622 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:31.622 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:31.622 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:31.622 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:31.622 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59879 00:04:31.622 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:31.622 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59879 00:04:31.622 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:31.622 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59879 00:04:31.622 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:31.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:31.622 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:31.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:31.622 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:31.622 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:31.622 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:31.622 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:31.622 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:31.622 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59879 00:04:31.622 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:31.622 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59879 00:04:31.622 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:31.622 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59879 00:04:31.622 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:31.622 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59879 00:04:31.622 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:31.622 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59879 00:04:31.622 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:31.622 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:31.622 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:31.622 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:04:31.622 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:31.622 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:31.622 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:31.622 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59879 00:04:31.622 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:31.622 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:31.622 element at address: 0x200027e65740 with size: 0.023743 MiB 00:04:31.622 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:31.622 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:31.622 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59879 00:04:31.622 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:04:31.622 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:31.622 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:31.622 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59879 00:04:31.622 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:31.622 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59879 00:04:31.622 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:04:31.622 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:31.622 20:40:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:31.622 20:40:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59879 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59879 ']' 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59879 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59879 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.622 killing process with pid 59879 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59879' 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59879 00:04:31.622 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59879 00:04:31.881 00:04:31.881 real 0m1.485s 00:04:31.881 user 0m1.491s 00:04:31.881 sys 0m0.420s 00:04:31.881 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.881 20:40:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.881 ************************************ 00:04:31.881 END TEST dpdk_mem_utility 00:04:31.881 ************************************ 00:04:31.881 20:40:53 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.881 20:40:53 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:31.881 20:40:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.881 20:40:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.881 20:40:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.881 ************************************ 00:04:31.881 START TEST event 00:04:31.881 
************************************ 00:04:31.881 20:40:53 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:32.139 * Looking for test storage... 00:04:32.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:32.139 20:40:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:32.139 20:40:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:32.139 20:40:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.139 20:40:53 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:32.139 20:40:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.139 20:40:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.139 ************************************ 00:04:32.139 START TEST event_perf 00:04:32.139 ************************************ 00:04:32.139 20:40:53 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:32.139 Running I/O for 1 seconds...[2024-07-15 20:40:53.914456] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:32.139 [2024-07-15 20:40:53.914565] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:04:32.396 [2024-07-15 20:40:54.061005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:32.396 [2024-07-15 20:40:54.161387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.396 [2024-07-15 20:40:54.161459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.396 [2024-07-15 20:40:54.163841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.396 [2024-07-15 20:40:54.163844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.331 Running I/O for 1 seconds... 00:04:33.331 lcore 0: 172945 00:04:33.331 lcore 1: 172943 00:04:33.331 lcore 2: 172944 00:04:33.331 lcore 3: 172945 00:04:33.331 done. 00:04:33.331 00:04:33.331 real 0m1.350s 00:04:33.331 user 0m4.163s 00:04:33.331 sys 0m0.065s 00:04:33.331 20:40:55 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.331 20:40:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.331 ************************************ 00:04:33.331 END TEST event_perf 00:04:33.331 ************************************ 00:04:33.590 20:40:55 event -- common/autotest_common.sh@1142 -- # return 0 00:04:33.590 20:40:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:33.590 20:40:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:33.590 20:40:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.590 20:40:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.590 ************************************ 00:04:33.590 START TEST event_reactor 00:04:33.590 ************************************ 00:04:33.590 20:40:55 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:33.590 [2024-07-15 20:40:55.326891] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
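The run_test call above drives the event_perf benchmark on four cores for one second. A minimal standalone sketch of that step, with the binary path and flags taken from the command line logged above (the checkout location /home/vagrant/spdk_repo/spdk is the one used on this build VM):

  # Run the SPDK event framework benchmark on cores 0-3 (mask 0xF) for 1 second;
  # it prints one "lcore N: <event count>" line per core, as seen further down in the log.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1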
00:04:33.590 [2024-07-15 20:40:55.326999] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59989 ] 00:04:33.590 [2024-07-15 20:40:55.470329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.849 [2024-07-15 20:40:55.566904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.786 test_start 00:04:34.786 oneshot 00:04:34.786 tick 100 00:04:34.786 tick 100 00:04:34.786 tick 250 00:04:34.786 tick 100 00:04:34.786 tick 100 00:04:34.786 tick 100 00:04:34.786 tick 250 00:04:34.786 tick 500 00:04:34.786 tick 100 00:04:34.786 tick 100 00:04:34.786 tick 250 00:04:34.786 tick 100 00:04:34.786 tick 100 00:04:34.786 test_end 00:04:34.786 00:04:34.786 real 0m1.336s 00:04:34.786 user 0m1.172s 00:04:34.786 sys 0m0.057s 00:04:34.786 20:40:56 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.786 20:40:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:34.786 ************************************ 00:04:34.786 END TEST event_reactor 00:04:34.786 ************************************ 00:04:34.786 20:40:56 event -- common/autotest_common.sh@1142 -- # return 0 00:04:34.786 20:40:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:34.786 20:40:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:34.786 20:40:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.786 20:40:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.074 ************************************ 00:04:35.074 START TEST event_reactor_perf 00:04:35.074 ************************************ 00:04:35.074 20:40:56 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.074 [2024-07-15 20:40:56.727941] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:04:35.074 [2024-07-15 20:40:56.728045] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:04:35.074 [2024-07-15 20:40:56.870565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.074 [2024-07-15 20:40:56.964531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.446 test_start 00:04:36.446 test_end 00:04:36.446 Performance: 482477 events per second 00:04:36.446 00:04:36.446 real 0m1.329s 00:04:36.446 user 0m1.165s 00:04:36.446 sys 0m0.058s 00:04:36.446 20:40:58 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.446 20:40:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.446 ************************************ 00:04:36.446 END TEST event_reactor_perf 00:04:36.446 ************************************ 00:04:36.446 20:40:58 event -- common/autotest_common.sh@1142 -- # return 0 00:04:36.446 20:40:58 event -- event/event.sh@49 -- # uname -s 00:04:36.446 20:40:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:36.446 20:40:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:36.446 20:40:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.446 20:40:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.446 20:40:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.446 ************************************ 00:04:36.446 START TEST event_scheduler 00:04:36.446 ************************************ 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:36.446 * Looking for test storage... 00:04:36.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:36.446 20:40:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:36.446 20:40:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60081 00:04:36.446 20:40:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:36.446 20:40:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.446 20:40:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60081 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60081 ']' 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.446 20:40:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.446 [2024-07-15 20:40:58.261008] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
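The reactor and reactor_perf tests launched above follow the same pattern as event_perf, but on a single core (-c 0x1 in the EAL parameters). A condensed sketch of the two invocations, with paths taken from the traced command lines; the oneshot/tick log and the "Performance: N events per second" line further down are their respective outputs:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Drive one reactor for a second and log the oneshot/tick events it processes.
  "$SPDK_DIR/test/event/reactor/reactor" -t 1
  # Measure raw event throughput on a single reactor for one second.
  "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1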
00:04:36.446 [2024-07-15 20:40:58.261458] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60081 ] 00:04:36.704 [2024-07-15 20:40:58.397384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.704 [2024-07-15 20:40:58.486903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.704 [2024-07-15 20:40:58.487009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.704 [2024-07-15 20:40:58.487012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.705 [2024-07-15 20:40:58.486942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.276 20:40:59 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.276 20:40:59 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:37.276 20:40:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:37.276 20:40:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.276 20:40:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.276 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.276 POWER: Cannot set governor of lcore 0 to performance 00:04:37.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.276 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.276 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.276 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.276 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:37.276 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:37.276 POWER: Unable to set Power Management Environment for lcore 0 00:04:37.276 [2024-07-15 20:40:59.179106] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:37.276 [2024-07-15 20:40:59.179121] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:37.276 [2024-07-15 20:40:59.179129] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:37.276 [2024-07-15 20:40:59.179141] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:37.276 [2024-07-15 20:40:59.179148] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:37.276 [2024-07-15 20:40:59.179155] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:37.277 20:40:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.277 20:40:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:37.277 20:40:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.277 20:40:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 [2024-07-15 20:40:59.227554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.535 [2024-07-15 20:40:59.253558] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:37.535 20:40:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:37.535 20:40:59 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.535 20:40:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 ************************************ 00:04:37.535 START TEST scheduler_create_thread 00:04:37.535 ************************************ 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 2 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 3 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 4 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 5 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 6 00:04:37.535 
20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 7 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 8 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 9 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.535 10 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.535 20:40:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.909 20:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.909 20:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:38.909 20:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:38.909 20:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.909 20:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.843 20:41:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.843 20:41:01 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:39.843 20:41:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.843 20:41:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.781 20:41:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.781 20:41:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:40.781 20:41:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:40.781 20:41:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.781 20:41:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.349 20:41:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.349 00:04:41.349 real 0m3.881s 00:04:41.349 user 0m0.028s 00:04:41.349 sys 0m0.006s 00:04:41.349 20:41:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.349 20:41:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.349 ************************************ 00:04:41.349 END TEST scheduler_create_thread 00:04:41.349 ************************************ 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:41.349 20:41:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:41.349 20:41:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60081 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60081 ']' 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60081 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60081 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:41.349 killing process with pid 60081 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60081' 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60081 00:04:41.349 20:41:03 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60081 00:04:41.919 [2024-07-15 20:41:03.528568] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
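Stripped of the xtrace noise, the scheduler test above is a short RPC sequence against an app started with --wait-for-rpc. A condensed sketch of that sequence using the same rpc_cmd helper (the autotest_common.sh wrapper around scripts/rpc.py on the default RPC socket); the cpumasks and busy percentages are the ones logged, and the thread ids 11 and 12 seen above are simply whatever the create calls returned in this particular run:

  # Start the scheduler test app on 4 cores; --wait-for-rpc defers init until told to proceed.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!

  rpc_cmd framework_set_scheduler dynamic   # pick the dynamic scheduler before init
  rpc_cmd framework_start_init              # finish subsystem initialization

  # Create threads with different cpumasks (-m) and busy percentages (-a), as in the trace.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

  # Raise a thread's busy percentage, then create and delete a throwaway thread.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  tmp_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tmp_id"

  kill "$scheduler_pid"   # the script uses the killprocess helper; plain kill shown here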
00:04:41.919 00:04:41.919 real 0m5.703s 00:04:41.919 user 0m12.637s 00:04:41.919 sys 0m0.385s 00:04:41.919 20:41:03 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.919 ************************************ 00:04:41.919 END TEST event_scheduler 00:04:41.919 ************************************ 00:04:41.919 20:41:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.177 20:41:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:42.177 20:41:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:42.177 20:41:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:42.177 20:41:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.177 20:41:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.177 20:41:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.177 ************************************ 00:04:42.177 START TEST app_repeat 00:04:42.177 ************************************ 00:04:42.177 20:41:03 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60197 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.177 Process app_repeat pid: 60197 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60197' 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:42.177 spdk_app_start Round 0 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:42.177 20:41:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60197 /var/tmp/spdk-nbd.sock 00:04:42.177 20:41:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60197 ']' 00:04:42.177 20:41:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.177 20:41:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:42.177 20:41:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.178 20:41:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.178 20:41:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.178 [2024-07-15 20:41:03.907198] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
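app_repeat is launched once and then exercised over its RPC socket for each round that follows. A minimal sketch of that launch, with the flags and socket path copied from the traced command line; waitforlisten is the autotest_common.sh helper that polls until the UNIX domain socket accepts RPCs:

  modprobe nbd   # the data checks below go through the kernel nbd driver
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
      -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &   # flags as logged above (two-core mask 0x3)
  repeat_pid=$!
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock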
00:04:42.178 [2024-07-15 20:41:03.907282] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60197 ] 00:04:42.178 [2024-07-15 20:41:04.056553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.436 [2024-07-15 20:41:04.141437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.436 [2024-07-15 20:41:04.141443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.436 [2024-07-15 20:41:04.183674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:43.003 20:41:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.003 20:41:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:43.003 20:41:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.262 Malloc0 00:04:43.262 20:41:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.522 Malloc1 00:04:43.522 20:41:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.522 20:41:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.780 /dev/nbd0 00:04:43.780 20:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.780 20:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:43.780 20:41:05 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.780 1+0 records in 00:04:43.780 1+0 records out 00:04:43.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338652 s, 12.1 MB/s 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:43.780 20:41:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:43.780 20:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.780 20:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.780 20:41:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.039 /dev/nbd1 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.039 1+0 records in 00:04:44.039 1+0 records out 00:04:44.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334538 s, 12.2 MB/s 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:44.039 20:41:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.039 20:41:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.300 { 00:04:44.300 "nbd_device": "/dev/nbd0", 00:04:44.300 "bdev_name": "Malloc0" 00:04:44.300 }, 00:04:44.300 { 00:04:44.300 "nbd_device": "/dev/nbd1", 00:04:44.300 "bdev_name": "Malloc1" 00:04:44.300 } 00:04:44.300 ]' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.300 { 00:04:44.300 "nbd_device": "/dev/nbd0", 00:04:44.300 "bdev_name": "Malloc0" 00:04:44.300 }, 00:04:44.300 { 00:04:44.300 "nbd_device": "/dev/nbd1", 00:04:44.300 "bdev_name": "Malloc1" 00:04:44.300 } 00:04:44.300 ]' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.300 /dev/nbd1' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.300 /dev/nbd1' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.300 256+0 records in 00:04:44.300 256+0 records out 00:04:44.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126851 s, 82.7 MB/s 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.300 256+0 records in 00:04:44.300 256+0 records out 00:04:44.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307325 s, 34.1 MB/s 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:44.300 256+0 records in 00:04:44.300 256+0 records out 00:04:44.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317293 s, 33.0 MB/s 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.300 20:41:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.560 20:41:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.818 20:41:06 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.818 20:41:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.076 20:41:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.076 20:41:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.334 20:41:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.334 [2024-07-15 20:41:07.195813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.592 [2024-07-15 20:41:07.273919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.592 [2024-07-15 20:41:07.273923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.592 [2024-07-15 20:41:07.316034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:45.592 [2024-07-15 20:41:07.316103] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.592 [2024-07-15 20:41:07.316114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.878 20:41:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.878 spdk_app_start Round 1 00:04:48.878 20:41:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:48.878 20:41:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60197 /var/tmp/spdk-nbd.sock 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60197 ']' 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
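Each spdk_app_start round above repeats the same data-path check: create 64 MB malloc bdevs with 4096-byte blocks, export them through the kernel nbd driver, push a 1 MiB random pattern through the block device, and compare it back. A condensed single-device sketch assembled from the rpc.py, dd, and cmp invocations logged above (the test runs this for both Malloc0 and Malloc1; the scratch-file path is the one it uses):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  PATTERN=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest   # scratch file used by the test

  # Create a 64 MB malloc bdev (4096-byte blocks) and export it as /dev/nbd0.
  bdev=$("$RPC" -s "$SOCK" bdev_malloc_create 64 4096)   # prints the new bdev name, e.g. Malloc0
  "$RPC" -s "$SOCK" nbd_start_disk "$bdev" /dev/nbd0

  # Write a 1 MiB random pattern through the nbd device and verify it reads back unchanged.
  dd if=/dev/urandom of="$PATTERN" bs=4096 count=256
  dd if="$PATTERN" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$PATTERN" /dev/nbd0
  rm "$PATTERN"

  # Detach the device; nbd_get_disks returns [] once nothing is exported.
  "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
  "$RPC" -s "$SOCK" nbd_get_disks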
00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.878 20:41:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:48.878 20:41:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.878 Malloc0 00:04:48.878 20:41:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.878 Malloc1 00:04:48.878 20:41:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.878 20:41:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.137 /dev/nbd0 00:04:49.137 20:41:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.137 20:41:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.137 1+0 records in 00:04:49.137 1+0 records out 
00:04:49.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016111 s, 25.4 MB/s 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:49.137 20:41:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:49.137 20:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.137 20:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.137 20:41:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.396 /dev/nbd1 00:04:49.396 20:41:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.396 20:41:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.396 1+0 records in 00:04:49.396 1+0 records out 00:04:49.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403959 s, 10.1 MB/s 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:49.396 20:41:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:49.396 20:41:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.396 20:41:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.396 20:41:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.397 20:41:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.397 20:41:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.655 { 00:04:49.655 "nbd_device": "/dev/nbd0", 00:04:49.655 "bdev_name": "Malloc0" 00:04:49.655 }, 00:04:49.655 { 00:04:49.655 "nbd_device": "/dev/nbd1", 00:04:49.655 "bdev_name": "Malloc1" 00:04:49.655 } 
00:04:49.655 ]' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.655 { 00:04:49.655 "nbd_device": "/dev/nbd0", 00:04:49.655 "bdev_name": "Malloc0" 00:04:49.655 }, 00:04:49.655 { 00:04:49.655 "nbd_device": "/dev/nbd1", 00:04:49.655 "bdev_name": "Malloc1" 00:04:49.655 } 00:04:49.655 ]' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.655 /dev/nbd1' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.655 /dev/nbd1' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.655 256+0 records in 00:04:49.655 256+0 records out 00:04:49.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489589 s, 214 MB/s 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.655 256+0 records in 00:04:49.655 256+0 records out 00:04:49.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185833 s, 56.4 MB/s 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.655 256+0 records in 00:04:49.655 256+0 records out 00:04:49.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349938 s, 30.0 MB/s 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.655 20:41:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.655 20:41:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.914 20:41:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.173 20:41:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.432 20:41:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.432 20:41:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.689 20:41:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.947 [2024-07-15 20:41:12.636030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.947 [2024-07-15 20:41:12.717520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.947 [2024-07-15 20:41:12.717526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.947 [2024-07-15 20:41:12.761046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.947 [2024-07-15 20:41:12.761117] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.948 [2024-07-15 20:41:12.761127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.240 20:41:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.240 spdk_app_start Round 2 00:04:54.240 20:41:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:54.240 20:41:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60197 /var/tmp/spdk-nbd.sock 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60197 ']' 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
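The trace above walks the standard NBD teardown: nbd_stop_disk is issued over the app's RPC socket for each device, waitfornbd_exit polls /proc/partitions until the kernel drops the device node, and nbd_get_disks piped through jq and grep -c confirms nothing is left exported. A condensed bash sketch of that pattern, reusing the rpc.py path and socket from this run; the retry count and sleep interval are assumptions, not values taken from the helper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

for dev in /dev/nbd0 /dev/nbd1; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        # done once the device name no longer shows up in /proc/partitions
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done

# nbd_get_disks should now return an empty list, i.e. zero /dev/nbd entries
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[[ $count -eq 0 ]]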
00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.240 20:41:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:54.240 20:41:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.240 Malloc0 00:04:54.240 20:41:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.240 Malloc1 00:04:54.240 20:41:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.240 20:41:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.499 /dev/nbd0 00:04:54.499 20:41:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.499 20:41:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.499 1+0 records in 00:04:54.499 1+0 records out 
00:04:54.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247283 s, 16.6 MB/s 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.499 20:41:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.499 20:41:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.499 20:41:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.499 20:41:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.760 /dev/nbd1 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.760 1+0 records in 00:04:54.760 1+0 records out 00:04:54.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250279 s, 16.4 MB/s 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.760 20:41:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.760 20:41:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.020 { 00:04:55.020 "nbd_device": "/dev/nbd0", 00:04:55.020 "bdev_name": "Malloc0" 00:04:55.020 }, 00:04:55.020 { 00:04:55.020 "nbd_device": "/dev/nbd1", 00:04:55.020 "bdev_name": "Malloc1" 00:04:55.020 } 
00:04:55.020 ]' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.020 { 00:04:55.020 "nbd_device": "/dev/nbd0", 00:04:55.020 "bdev_name": "Malloc0" 00:04:55.020 }, 00:04:55.020 { 00:04:55.020 "nbd_device": "/dev/nbd1", 00:04:55.020 "bdev_name": "Malloc1" 00:04:55.020 } 00:04:55.020 ]' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.020 /dev/nbd1' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.020 /dev/nbd1' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.020 256+0 records in 00:04:55.020 256+0 records out 00:04:55.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124589 s, 84.2 MB/s 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.020 256+0 records in 00:04:55.020 256+0 records out 00:04:55.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291786 s, 35.9 MB/s 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.020 256+0 records in 00:04:55.020 256+0 records out 00:04:55.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030338 s, 34.6 MB/s 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.020 20:41:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.279 20:41:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.538 20:41:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.798 20:41:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.799 20:41:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.799 20:41:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.058 20:41:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.058 [2024-07-15 20:41:17.914316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.341 [2024-07-15 20:41:18.001990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.341 [2024-07-15 20:41:18.001995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.341 [2024-07-15 20:41:18.044026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:56.341 [2024-07-15 20:41:18.044099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.341 [2024-07-15 20:41:18.044110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.876 20:41:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60197 /var/tmp/spdk-nbd.sock 00:04:58.876 20:41:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60197 ']' 00:04:58.876 20:41:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.876 20:41:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.876 20:41:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
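Each app_repeat round runs the same data pass that is visible in the dd and cmp lines above: 1 MiB of random data is written to a temp file, pushed through both NBD devices with O_DIRECT, then compared back byte for byte. A minimal standalone version of that pass, with the temp-file path taken from the log and the device list assumed to be the two devices exported here:

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data

for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it through each NBD device
done

for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$dev"                              # non-zero exit on the first mismatch fails the test
done

rm "$tmp"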
00:04:58.876 20:41:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.876 20:41:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:59.134 20:41:20 event.app_repeat -- event/event.sh@39 -- # killprocess 60197 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60197 ']' 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60197 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.134 20:41:20 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60197 00:04:59.134 killing process with pid 60197 00:04:59.134 20:41:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.134 20:41:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.134 20:41:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60197' 00:04:59.134 20:41:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60197 00:04:59.134 20:41:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60197 00:04:59.392 spdk_app_start is called in Round 0. 00:04:59.392 Shutdown signal received, stop current app iteration 00:04:59.392 Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 reinitialization... 00:04:59.392 spdk_app_start is called in Round 1. 00:04:59.392 Shutdown signal received, stop current app iteration 00:04:59.392 Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 reinitialization... 00:04:59.392 spdk_app_start is called in Round 2. 00:04:59.392 Shutdown signal received, stop current app iteration 00:04:59.392 Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 reinitialization... 00:04:59.392 spdk_app_start is called in Round 3. 
00:04:59.392 Shutdown signal received, stop current app iteration 00:04:59.392 20:41:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.392 20:41:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.392 00:04:59.392 real 0m17.329s 00:04:59.392 user 0m37.839s 00:04:59.392 sys 0m2.765s 00:04:59.392 20:41:21 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.392 20:41:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.392 ************************************ 00:04:59.392 END TEST app_repeat 00:04:59.392 ************************************ 00:04:59.392 20:41:21 event -- common/autotest_common.sh@1142 -- # return 0 00:04:59.392 20:41:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.392 20:41:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.392 20:41:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.392 20:41:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.392 20:41:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.392 ************************************ 00:04:59.392 START TEST cpu_locks 00:04:59.392 ************************************ 00:04:59.392 20:41:21 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.666 * Looking for test storage... 00:04:59.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.666 20:41:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.666 20:41:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.666 20:41:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.666 20:41:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.666 20:41:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.666 20:41:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.666 20:41:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.666 ************************************ 00:04:59.666 START TEST default_locks 00:04:59.666 ************************************ 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60608 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60608 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60608 ']' 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
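The killprocess calls in the trace follow one pattern: check that the pid is still alive, resolve its command name with ps, send SIGTERM and wait for it to exit. A simplified reconstruction of that helper is below; it assumes the pid is a child of the test shell and leaves out the sudo special-casing the real autotest_common.sh performs:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1            # refuse an empty pid
    kill -0 "$pid" || return 1           # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
    fi
    # (the real helper branches on $process_name = sudo before killing)
    echo "killing process with pid $pid"
    kill "$pid"                          # SIGTERM; the app shuts its reactors down
    wait "$pid"                          # reap it so the next test starts clean
}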
00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.666 20:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.666 [2024-07-15 20:41:21.481304] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:04:59.666 [2024-07-15 20:41:21.481553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:04:59.956 [2024-07-15 20:41:21.621673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.956 [2024-07-15 20:41:21.717429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.956 [2024-07-15 20:41:21.759627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.523 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.523 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:00.523 20:41:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60608 00:05:00.523 20:41:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.523 20:41:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60608 00:05:01.090 20:41:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60608 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60608 ']' 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60608 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60608 00:05:01.091 killing process with pid 60608 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60608' 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60608 00:05:01.091 20:41:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60608 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60608 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60608 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.349 20:41:23 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60608 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60608 ']' 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.349 ERROR: process (pid: 60608) is no longer running 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.349 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60608) - No such process 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.349 00:05:01.349 real 0m1.719s 00:05:01.349 user 0m1.773s 00:05:01.349 sys 0m0.536s 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.349 ************************************ 00:05:01.349 END TEST default_locks 00:05:01.349 ************************************ 00:05:01.349 20:41:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.349 20:41:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.349 20:41:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:01.349 20:41:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.349 20:41:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.349 20:41:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.349 ************************************ 00:05:01.349 START TEST default_locks_via_rpc 00:05:01.349 ************************************ 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60654 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60654 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60654 ']' 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.349 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.350 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.350 20:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.608 [2024-07-15 20:41:23.271629] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:01.608 [2024-07-15 20:41:23.271694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60654 ] 00:05:01.609 [2024-07-15 20:41:23.415090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.609 [2024-07-15 20:41:23.514795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.868 [2024-07-15 20:41:23.556449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60654 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60654 00:05:02.438 20:41:24 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60654 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60654 ']' 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60654 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60654 00:05:03.019 killing process with pid 60654 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60654' 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60654 00:05:03.019 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60654 00:05:03.278 00:05:03.278 real 0m1.757s 00:05:03.278 user 0m1.824s 00:05:03.278 sys 0m0.544s 00:05:03.278 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.278 ************************************ 00:05:03.278 END TEST default_locks_via_rpc 00:05:03.278 ************************************ 00:05:03.278 20:41:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.278 20:41:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:03.278 20:41:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:03.278 20:41:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.278 20:41:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.278 20:41:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.278 ************************************ 00:05:03.278 START TEST non_locking_app_on_locked_coremask 00:05:03.278 ************************************ 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60705 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60705 /var/tmp/spdk.sock 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60705 ']' 00:05:03.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
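The cpu_locks tests that just ran all assert the same invariant: an SPDK target started on core 0 holds an spdk_cpu_lock file lock that lslocks can see, and the framework_disable/enable_cpumask_locks RPCs release and re-acquire it at runtime. A sketch of that check, with a plain sleep standing in for the waitforlisten helper and the lock-release behaviour inferred from the no_locks/locks_exist sequence in the trace:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$spdk_tgt" -m 0x1 &                              # single-core app, claims the core-0 lock
pid=$!
sleep 2                                           # the real test polls the RPC socket instead

lslocks -p "$pid" | grep -q spdk_cpu_lock         # lock is held

"$rpc" framework_disable_cpumask_locks            # drop the per-core lock while running
! lslocks -p "$pid" | grep -q spdk_cpu_lock       # no spdk_cpu_lock left on the pid
"$rpc" framework_enable_cpumask_locks             # claim it again

kill "$pid"
wait "$pid"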
00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.278 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.279 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.279 [2024-07-15 20:41:25.098842] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:03.279 [2024-07-15 20:41:25.098912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60705 ] 00:05:03.538 [2024-07-15 20:41:25.241147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.538 [2024-07-15 20:41:25.335995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.538 [2024-07-15 20:41:25.378018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60721 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60721 /var/tmp/spdk2.sock 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60721 ']' 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.108 20:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.108 [2024-07-15 20:41:25.979524] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:04.108 [2024-07-15 20:41:25.979595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60721 ] 00:05:04.367 [2024-07-15 20:41:26.113798] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:04.367 [2024-07-15 20:41:26.113844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.627 [2024-07-15 20:41:26.306338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.627 [2024-07-15 20:41:26.392065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.196 20:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.196 20:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:05.196 20:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60705 00:05:05.196 20:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.196 20:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60705 00:05:06.195 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60705 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60705 ']' 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60705 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60705 00:05:06.196 killing process with pid 60705 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60705' 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60705 00:05:06.196 20:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60705 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60721 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60721 ']' 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60721 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60721 00:05:06.762 killing process with pid 60721 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60721' 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60721 00:05:06.762 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60721 00:05:07.019 00:05:07.019 real 0m3.749s 00:05:07.019 user 0m4.031s 00:05:07.019 sys 0m1.068s 00:05:07.019 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.019 ************************************ 00:05:07.019 END TEST non_locking_app_on_locked_coremask 00:05:07.019 ************************************ 00:05:07.019 20:41:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.019 20:41:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:07.019 20:41:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:07.019 20:41:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.019 20:41:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.019 20:41:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.019 ************************************ 00:05:07.019 START TEST locking_app_on_unlocked_coremask 00:05:07.019 ************************************ 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60783 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60783 /var/tmp/spdk.sock 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60783 ']' 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
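non_locking_app_on_locked_coremask, which finished just above, boils down to this: a first target claims the core-0 lock, and a second target started with --disable-cpumask-locks can still come up on the same core because it never tries to take that lock. A compact sketch of the arrangement, using the binary path and the /var/tmp/spdk2.sock secondary RPC socket seen in the trace; the sleep is a stand-in for waitforlisten:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                                                 # holds spdk_cpu_lock for core 0
locked_pid=$!
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no lock taken
unlocked_pid=$!
sleep 2

lslocks -p "$locked_pid" | grep -q spdk_cpu_lock    # only the first instance owns the lock

kill "$locked_pid" "$unlocked_pid"
wait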
00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.019 20:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.019 [2024-07-15 20:41:28.920930] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:07.019 [2024-07-15 20:41:28.920996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60783 ] 00:05:07.276 [2024-07-15 20:41:29.063284] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:07.276 [2024-07-15 20:41:29.063337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.276 [2024-07-15 20:41:29.156291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.534 [2024-07-15 20:41:29.197720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60799 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60799 /var/tmp/spdk2.sock 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60799 ']' 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.100 20:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.100 [2024-07-15 20:41:29.803319] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:08.100 [2024-07-15 20:41:29.803576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60799 ] 00:05:08.100 [2024-07-15 20:41:29.939771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.359 [2024-07-15 20:41:30.135587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.359 [2024-07-15 20:41:30.221586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.926 20:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.926 20:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:08.926 20:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60799 00:05:08.926 20:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60799 00:05:08.926 20:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.597 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60783 00:05:09.597 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60783 ']' 00:05:09.597 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60783 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60783 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.856 killing process with pid 60783 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60783' 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60783 00:05:09.856 20:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60783 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60799 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60799 ']' 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60799 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60799 00:05:10.424 killing process with pid 60799 00:05:10.424 20:41:32 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60799' 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60799 00:05:10.424 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60799 00:05:10.683 ************************************ 00:05:10.683 END TEST locking_app_on_unlocked_coremask 00:05:10.683 ************************************ 00:05:10.683 00:05:10.683 real 0m3.655s 00:05:10.683 user 0m3.972s 00:05:10.683 sys 0m1.027s 00:05:10.683 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.683 20:41:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.683 20:41:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.683 20:41:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:10.683 20:41:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.683 20:41:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.683 20:41:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.683 ************************************ 00:05:10.683 START TEST locking_app_on_locked_coremask 00:05:10.683 ************************************ 00:05:10.683 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60866 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60866 /var/tmp/spdk.sock 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60866 ']' 00:05:10.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.941 20:41:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.941 [2024-07-15 20:41:32.647279] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
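Every test in this file launches spdk_tgt in the background and then blocks in waitforlisten until the RPC socket answers, which is what the repeated 'Waiting for process to start up and listen on UNIX domain socket ...' lines are. A rough sketch of that wait loop; the retry budget and the use of rpc_get_methods as the probe are assumptions, and the real helper in autotest_common.sh does more bookkeeping:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || return 1                           # app died before listening
        if "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                         # socket is up and answering RPCs
        fi
        sleep 0.1
    done
    return 1                                                 # timed out
}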
00:05:10.941 [2024-07-15 20:41:32.647344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:05:10.941 [2024-07-15 20:41:32.786599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.200 [2024-07-15 20:41:32.879341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.200 [2024-07-15 20:41:32.921096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60876 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60876 /var/tmp/spdk2.sock 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60876 /var/tmp/spdk2.sock 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:11.766 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60876 /var/tmp/spdk2.sock 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60876 ']' 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.767 20:41:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.767 [2024-07-15 20:41:33.525747] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:11.767 [2024-07-15 20:41:33.526364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60876 ] 00:05:11.767 [2024-07-15 20:41:33.660630] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60866 has claimed it. 00:05:11.767 [2024-07-15 20:41:33.660678] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.333 ERROR: process (pid: 60876) is no longer running 00:05:12.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60876) - No such process 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60866 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60866 00:05:12.333 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60866 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60866 ']' 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60866 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60866 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60866' 00:05:12.930 killing process with pid 60866 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60866 00:05:12.930 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60866 00:05:13.189 00:05:13.189 real 0m2.344s 00:05:13.189 user 0m2.589s 00:05:13.189 sys 0m0.579s 00:05:13.189 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.189 ************************************ 00:05:13.189 END 
TEST locking_app_on_locked_coremask 00:05:13.189 ************************************ 00:05:13.189 20:41:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.189 20:41:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.189 20:41:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:13.189 20:41:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.189 20:41:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.189 20:41:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.189 ************************************ 00:05:13.189 START TEST locking_overlapped_coremask 00:05:13.189 ************************************ 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60922 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60922 /var/tmp/spdk.sock 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60922 ']' 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.189 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.189 [2024-07-15 20:41:35.059861] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:13.189 [2024-07-15 20:41:35.059927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:05:13.448 [2024-07-15 20:41:35.203926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.448 [2024-07-15 20:41:35.292368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.448 [2024-07-15 20:41:35.292510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.448 [2024-07-15 20:41:35.292513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.448 [2024-07-15 20:41:35.334488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60940 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60940 /var/tmp/spdk2.sock 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60940 /var/tmp/spdk2.sock 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60940 /var/tmp/spdk2.sock 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60940 ']' 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.016 20:41:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.276 [2024-07-15 20:41:35.945142] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
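[Annotation] The conflict provoked next follows directly from the two core masks: -m 0x7 is binary 00111 (cores 0-2) and -m 0x1c is binary 11100 (cores 2-4), so both instances want core 2. The overlap can be checked with plain shell arithmetic, nothing SPDK-specific:
  printf 'first:   0x%x\n' 0x7                  # 0b00111 -> cores 0,1,2
  printf 'second:  0x%x\n' 0x1c                 # 0b11100 -> cores 2,3,4
  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))    # 0x4 -> bit 2 set, i.e. core 2 is claimed twice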
00:05:14.276 [2024-07-15 20:41:35.945340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60940 ] 00:05:14.276 [2024-07-15 20:41:36.081622] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60922 has claimed it. 00:05:14.276 [2024-07-15 20:41:36.081678] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.844 ERROR: process (pid: 60940) is no longer running 00:05:14.844 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60940) - No such process 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60922 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60922 ']' 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60922 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60922 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60922' 00:05:14.844 killing process with pid 60922 00:05:14.844 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60922 00:05:14.844 20:41:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60922 00:05:15.103 00:05:15.103 real 0m1.962s 00:05:15.103 user 0m5.286s 00:05:15.103 sys 0m0.378s 00:05:15.103 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.103 20:41:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.103 ************************************ 00:05:15.103 END TEST locking_overlapped_coremask 00:05:15.104 ************************************ 00:05:15.363 20:41:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:15.363 20:41:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:15.363 20:41:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.363 20:41:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.363 20:41:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.363 ************************************ 00:05:15.363 START TEST locking_overlapped_coremask_via_rpc 00:05:15.363 ************************************ 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60980 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60980 /var/tmp/spdk.sock 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60980 ']' 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.363 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.363 [2024-07-15 20:41:37.090263] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:15.363 [2024-07-15 20:41:37.090452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60980 ] 00:05:15.363 [2024-07-15 20:41:37.216902] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.363 [2024-07-15 20:41:37.217155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.622 [2024-07-15 20:41:37.323510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.622 [2024-07-15 20:41:37.323613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.622 [2024-07-15 20:41:37.323613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.622 [2024-07-15 20:41:37.373345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60998 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60998 /var/tmp/spdk2.sock 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60998 ']' 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.210 20:41:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.210 [2024-07-15 20:41:38.027659] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:16.210 [2024-07-15 20:41:38.027914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:05:16.468 [2024-07-15 20:41:38.164693] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
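[Annotation] Both targets in this via_rpc test start with --disable-cpumask-locks, which is why the overlapping masks 0x7 and 0x1c can boot side by side here ("CPU core locks deactivated" above) instead of failing at startup as in the previous test. A minimal sketch of that startup half, using the same flags and socket path as this run:
  build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # with locks disabled, neither instance takes /var/tmp/spdk_cpu_lock_* at boot;
  # the locks are only claimed later, when framework_enable_cpumask_locks is called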
00:05:16.468 [2024-07-15 20:41:38.164743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.468 [2024-07-15 20:41:38.367924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.468 [2024-07-15 20:41:38.368023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.468 [2024-07-15 20:41:38.368027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:16.727 [2024-07-15 20:41:38.452746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.294 [2024-07-15 20:41:38.939294] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60980 has claimed it. 
00:05:17.294 request: 00:05:17.294 { 00:05:17.294 "method": "framework_enable_cpumask_locks", 00:05:17.294 "req_id": 1 00:05:17.294 } 00:05:17.294 Got JSON-RPC error response 00:05:17.294 response: 00:05:17.294 { 00:05:17.294 "code": -32603, 00:05:17.294 "message": "Failed to claim CPU core: 2" 00:05:17.294 } 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60980 /var/tmp/spdk.sock 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60980 ']' 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.294 20:41:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60998 /var/tmp/spdk2.sock 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60998 ']' 00:05:17.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
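[Annotation] The -32603 "Failed to claim CPU core: 2" response above is what framework_enable_cpumask_locks returns when another process already holds one of the requested cores. The test drives it through the rpc_cmd helper; a roughly equivalent direct invocation (assuming scripts/rpc.py from the SPDK repo, which is what rpc_cmd wraps in these tests) would be:
  scripts/rpc.py framework_enable_cpumask_locks                         # first instance claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second instance: JSON-RPC error -32603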
00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.294 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:17.552 ************************************ 00:05:17.552 END TEST locking_overlapped_coremask_via_rpc 00:05:17.552 ************************************ 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.552 00:05:17.552 real 0m2.363s 00:05:17.552 user 0m1.087s 00:05:17.552 sys 0m0.202s 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.552 20:41:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.552 20:41:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.552 20:41:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:17.552 20:41:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60980 ]] 00:05:17.552 20:41:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60980 00:05:17.552 20:41:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60980 ']' 00:05:17.552 20:41:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60980 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60980 00:05:17.810 killing process with pid 60980 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60980' 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60980 00:05:17.810 20:41:39 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60980 00:05:18.067 20:41:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60998 ]] 00:05:18.067 20:41:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60998 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60998 ']' 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60998 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:18.067 20:41:39 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60998 00:05:18.067 killing process with pid 60998 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60998' 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60998 00:05:18.067 20:41:39 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60998 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:18.325 Process with pid 60980 is not found 00:05:18.325 Process with pid 60998 is not found 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60980 ]] 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60980 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60980 ']' 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60980 00:05:18.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60980) - No such process 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60980 is not found' 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60998 ]] 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60998 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60998 ']' 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60998 00:05:18.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60998) - No such process 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60998 is not found' 00:05:18.325 20:41:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.325 00:05:18.325 real 0m18.948s 00:05:18.325 user 0m31.822s 00:05:18.325 sys 0m5.284s 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.325 20:41:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.325 ************************************ 00:05:18.325 END TEST cpu_locks 00:05:18.325 ************************************ 00:05:18.584 20:41:40 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.584 00:05:18.584 real 0m46.532s 00:05:18.584 user 1m28.987s 00:05:18.584 sys 0m8.962s 00:05:18.584 ************************************ 00:05:18.584 END TEST event 00:05:18.584 ************************************ 00:05:18.584 20:41:40 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.584 20:41:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.584 20:41:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.584 20:41:40 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:18.584 20:41:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.584 20:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.584 20:41:40 -- common/autotest_common.sh@10 -- # set +x 00:05:18.584 ************************************ 00:05:18.584 START TEST thread 
00:05:18.584 ************************************ 00:05:18.584 20:41:40 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:18.584 * Looking for test storage... 00:05:18.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:18.843 20:41:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:18.843 20:41:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:18.843 20:41:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.843 20:41:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.843 ************************************ 00:05:18.843 START TEST thread_poller_perf 00:05:18.843 ************************************ 00:05:18.843 20:41:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:18.843 [2024-07-15 20:41:40.539549] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:18.843 [2024-07-15 20:41:40.539643] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61115 ] 00:05:18.843 [2024-07-15 20:41:40.681640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.101 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:19.101 [2024-07-15 20:41:40.781926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.040 ====================================== 00:05:20.040 busy:2500621346 (cyc) 00:05:20.040 total_run_count: 373000 00:05:20.040 tsc_hz: 2490000000 (cyc) 00:05:20.040 ====================================== 00:05:20.040 poller_cost: 6704 (cyc), 2692 (nsec) 00:05:20.040 00:05:20.040 real 0m1.348s 00:05:20.040 user 0m1.182s 00:05:20.040 sys 0m0.058s 00:05:20.041 20:41:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.041 20:41:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 ************************************ 00:05:20.041 END TEST thread_poller_perf 00:05:20.041 ************************************ 00:05:20.041 20:41:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:20.041 20:41:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.041 20:41:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:20.041 20:41:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.041 20:41:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.041 ************************************ 00:05:20.041 START TEST thread_poller_perf 00:05:20.041 ************************************ 00:05:20.041 20:41:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.300 [2024-07-15 20:41:41.955547] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
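[Annotation] The poller_cost figure printed by poller_perf is just the measured busy cycles divided by the number of poller invocations, converted to nanoseconds with the reported TSC rate. Reproducing the first run's numbers with shell arithmetic (values copied from the output above):
  awk 'BEGIN {
    busy = 2500621346; runs = 373000; tsc_hz = 2490000000
    cyc = busy / runs                                          # ~6704 cycles per poller invocation
    printf "%.0f cyc, %.0f nsec\n", cyc, cyc / tsc_hz * 1e9    # prints: 6704 cyc, 2692 nsec
  }'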
00:05:20.300 [2024-07-15 20:41:41.955643] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61156 ] 00:05:20.300 [2024-07-15 20:41:42.099745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.300 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:20.300 [2024-07-15 20:41:42.199656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.678 ====================================== 00:05:21.678 busy:2492446870 (cyc) 00:05:21.678 total_run_count: 5047000 00:05:21.678 tsc_hz: 2490000000 (cyc) 00:05:21.678 ====================================== 00:05:21.678 poller_cost: 493 (cyc), 197 (nsec) 00:05:21.678 00:05:21.678 real 0m1.338s 00:05:21.678 user 0m1.177s 00:05:21.678 sys 0m0.053s 00:05:21.678 20:41:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.678 20:41:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.678 ************************************ 00:05:21.678 END TEST thread_poller_perf 00:05:21.678 ************************************ 00:05:21.678 20:41:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:21.678 20:41:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:21.678 00:05:21.678 real 0m2.960s 00:05:21.678 user 0m2.466s 00:05:21.678 sys 0m0.284s 00:05:21.678 20:41:43 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.678 20:41:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.679 ************************************ 00:05:21.679 END TEST thread 00:05:21.679 ************************************ 00:05:21.679 20:41:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.679 20:41:43 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:21.679 20:41:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.679 20:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.679 20:41:43 -- common/autotest_common.sh@10 -- # set +x 00:05:21.679 ************************************ 00:05:21.679 START TEST accel 00:05:21.679 ************************************ 00:05:21.679 20:41:43 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:21.679 * Looking for test storage... 00:05:21.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:21.679 20:41:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:21.679 20:41:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:21.679 20:41:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.679 20:41:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61225 00:05:21.679 20:41:43 accel -- accel/accel.sh@63 -- # waitforlisten 61225 00:05:21.679 20:41:43 accel -- common/autotest_common.sh@829 -- # '[' -z 61225 ']' 00:05:21.679 20:41:43 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.679 20:41:43 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.679 20:41:43 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:21.679 20:41:43 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.679 20:41:43 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:21.679 20:41:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:21.679 20:41:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.679 20:41:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.679 20:41:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.679 20:41:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.679 20:41:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.679 20:41:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.679 20:41:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:21.679 20:41:43 accel -- accel/accel.sh@41 -- # jq -r . 00:05:21.679 [2024-07-15 20:41:43.579999] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:21.679 [2024-07-15 20:41:43.580078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:05:21.938 [2024-07-15 20:41:43.721530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.938 [2024-07-15 20:41:43.823652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.197 [2024-07-15 20:41:43.864926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@862 -- # return 0 00:05:22.197 20:41:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:22.197 20:41:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:22.197 20:41:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:22.197 20:41:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:22.197 20:41:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:22.197 20:41:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.197 20:41:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 
20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # IFS== 00:05:22.197 20:41:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:22.197 20:41:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:22.197 20:41:44 accel -- accel/accel.sh@75 -- # killprocess 61225 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@948 -- # '[' -z 61225 ']' 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@952 -- # kill -0 61225 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@953 -- # uname 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.197 20:41:44 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61225 00:05:22.456 20:41:44 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.456 20:41:44 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.456 20:41:44 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61225' 00:05:22.456 killing process with pid 61225 00:05:22.456 20:41:44 accel -- common/autotest_common.sh@967 -- # kill 61225 00:05:22.456 20:41:44 accel -- common/autotest_common.sh@972 -- # wait 61225 00:05:22.715 20:41:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:22.715 20:41:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.715 20:41:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:22.715 20:41:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:22.715 20:41:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.715 20:41:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.715 20:41:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.715 20:41:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.715 ************************************ 00:05:22.715 START TEST accel_missing_filename 00:05:22.715 ************************************ 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.715 20:41:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.715 20:41:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:22.716 20:41:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:22.716 [2024-07-15 20:41:44.596966] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:22.716 [2024-07-15 20:41:44.597057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61269 ] 00:05:22.975 [2024-07-15 20:41:44.740057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.975 [2024-07-15 20:41:44.850201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.260 [2024-07-15 20:41:44.894190] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.260 [2024-07-15 20:41:44.956172] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:23.260 A filename is required. 
00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.260 00:05:23.260 real 0m0.474s 00:05:23.260 user 0m0.304s 00:05:23.260 sys 0m0.109s 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.260 ************************************ 00:05:23.260 END TEST accel_missing_filename 00:05:23.260 20:41:45 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:23.260 ************************************ 00:05:23.260 20:41:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.260 20:41:45 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.260 20:41:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:23.260 20:41:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.260 20:41:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.260 ************************************ 00:05:23.261 START TEST accel_compress_verify 00:05:23.261 ************************************ 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.261 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.261 20:41:45 accel.accel_compress_verify -- 
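[Annotation] The "A filename is required." failure above is the expected outcome of this negative test: for the compress workload accel_perf needs an uncompressed input file via -l, which accel_missing_filename deliberately omits. A sketch of the failing call next to its positive counterpart (the bib input path is the one this repo's tests use; without the -y verify flag the compress run should succeed):
  build/examples/accel_perf -t 1 -w compress                                                  # fails: A filename is required.
  build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # compresses the test input file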
accel/accel.sh@40 -- # local IFS=, 00:05:23.261 20:41:45 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:23.261 [2024-07-15 20:41:45.145855] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:23.261 [2024-07-15 20:41:45.145940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61299 ] 00:05:23.519 [2024-07-15 20:41:45.289329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.519 [2024-07-15 20:41:45.392114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.779 [2024-07-15 20:41:45.436831] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.779 [2024-07-15 20:41:45.499372] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:23.779 00:05:23.779 Compression does not support the verify option, aborting. 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.779 00:05:23.779 real 0m0.471s 00:05:23.779 user 0m0.290s 00:05:23.779 sys 0m0.117s 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.779 20:41:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:23.779 ************************************ 00:05:23.779 END TEST accel_compress_verify 00:05:23.779 ************************************ 00:05:23.779 20:41:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.779 20:41:45 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:23.779 20:41:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.779 20:41:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.779 20:41:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.779 ************************************ 00:05:23.779 START TEST accel_wrong_workload 00:05:23.779 ************************************ 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.779 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:23.779 20:41:45 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:24.038 Unsupported workload type: foobar 00:05:24.038 [2024-07-15 20:41:45.689763] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:24.038 accel_perf options: 00:05:24.038 [-h help message] 00:05:24.038 [-q queue depth per core] 00:05:24.038 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:24.038 [-T number of threads per core 00:05:24.038 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:24.038 [-t time in seconds] 00:05:24.038 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:24.038 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:24.038 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:24.038 [-l for compress/decompress workloads, name of uncompressed input file 00:05:24.038 [-S for crc32c workload, use this seed value (default 0) 00:05:24.038 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:24.038 [-f for fill workload, use this BYTE value (default 255) 00:05:24.038 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:24.038 [-y verify result if this switch is on] 00:05:24.038 [-a tasks to allocate per core (default: same value as -q)] 00:05:24.038 Can be used to spread operations across a wider range of memory. 
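The listing above is the accel_perf usage text, printed because 'foobar' is not a recognized -w workload type. For illustration only, the passing tests later in this run combine those switches roughly as follows (binary path and arguments copied from the trace; behavior outside the harness-supplied config is an assumption):

  # crc32c for 1 second, seed 32, with result verification (-w, -t, -S, -y as documented above)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # fill for 1 second with byte value 128, queue depth 64, 64 tasks per core (-f, -q, -a)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y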
00:05:24.038 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:24.038 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.038 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.038 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.038 00:05:24.038 real 0m0.044s 00:05:24.038 user 0m0.020s 00:05:24.038 sys 0m0.023s 00:05:24.038 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.038 20:41:45 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:24.039 ************************************ 00:05:24.039 END TEST accel_wrong_workload 00:05:24.039 ************************************ 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.039 20:41:45 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.039 ************************************ 00:05:24.039 START TEST accel_negative_buffers 00:05:24.039 ************************************ 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:24.039 20:41:45 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:24.039 -x option must be non-negative. 
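Like the foobar case, the negative-buffers test simply forwards an invalid value to the example binary. A minimal sketch of the rejected call (arguments copied from the xtrace above; the usage text lists 2 as the minimum for -x) is:

  # xor workload with an invalid source-buffer count; expected to exit non-zero
  # after printing "-x option must be non-negative."
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1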
00:05:24.039 [2024-07-15 20:41:45.795852] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:24.039 accel_perf options: 00:05:24.039 [-h help message] 00:05:24.039 [-q queue depth per core] 00:05:24.039 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:24.039 [-T number of threads per core 00:05:24.039 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:24.039 [-t time in seconds] 00:05:24.039 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:24.039 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:24.039 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:24.039 [-l for compress/decompress workloads, name of uncompressed input file 00:05:24.039 [-S for crc32c workload, use this seed value (default 0) 00:05:24.039 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:24.039 [-f for fill workload, use this BYTE value (default 255) 00:05:24.039 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:24.039 [-y verify result if this switch is on] 00:05:24.039 [-a tasks to allocate per core (default: same value as -q)] 00:05:24.039 Can be used to spread operations across a wider range of memory. 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.039 00:05:24.039 real 0m0.042s 00:05:24.039 user 0m0.029s 00:05:24.039 sys 0m0.013s 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.039 20:41:45 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:24.039 ************************************ 00:05:24.039 END TEST accel_negative_buffers 00:05:24.039 ************************************ 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.039 20:41:45 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.039 20:41:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.039 ************************************ 00:05:24.039 START TEST accel_crc32c 00:05:24.039 ************************************ 00:05:24.039 20:41:45 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:24.039 20:41:45 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:24.039 [2024-07-15 20:41:45.906559] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:24.039 [2024-07-15 20:41:45.906648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61361 ] 00:05:24.297 [2024-07-15 20:41:46.053841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.297 [2024-07-15 20:41:46.157927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.297 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.556 20:41:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:25.494 20:41:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.494 00:05:25.494 real 0m1.473s 00:05:25.494 user 0m0.022s 00:05:25.494 sys 0m0.005s 00:05:25.494 20:41:47 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.494 20:41:47 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:25.494 ************************************ 00:05:25.494 END TEST accel_crc32c 00:05:25.494 ************************************ 00:05:25.753 20:41:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.753 20:41:47 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:25.753 20:41:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.753 20:41:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.753 20:41:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.753 ************************************ 00:05:25.753 START TEST accel_crc32c_C2 00:05:25.753 ************************************ 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:25.754 20:41:47 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:25.754 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:25.754 [2024-07-15 20:41:47.450290] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:25.754 [2024-07-15 20:41:47.450376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61392 ] 00:05:25.754 [2024-07-15 20:41:47.592536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.013 [2024-07-15 20:41:47.693411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.013 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.014 20:41:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:48 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.392 00:05:27.392 real 0m1.459s 00:05:27.392 user 0m1.265s 00:05:27.392 sys 0m0.108s 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.392 20:41:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:27.392 ************************************ 00:05:27.392 END TEST accel_crc32c_C2 00:05:27.392 ************************************ 00:05:27.392 20:41:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.392 20:41:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:27.392 20:41:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:27.392 20:41:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.392 20:41:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.392 ************************************ 00:05:27.392 START TEST accel_copy 00:05:27.392 ************************************ 00:05:27.392 20:41:48 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:27.392 20:41:48 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:27.392 20:41:48 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:27.392 [2024-07-15 20:41:48.979385] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:27.392 [2024-07-15 20:41:48.979464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61422 ] 00:05:27.392 [2024-07-15 20:41:49.120649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.392 [2024-07-15 20:41:49.214336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.392 20:41:49 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.392 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.393 20:41:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 ************************************ 00:05:28.769 END TEST accel_copy 00:05:28.769 ************************************ 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:28.769 20:41:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.769 00:05:28.769 real 0m1.451s 00:05:28.769 user 0m1.253s 00:05:28.769 sys 0m0.110s 00:05:28.769 20:41:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.769 20:41:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:28.769 20:41:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.769 20:41:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.769 20:41:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:28.769 20:41:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.769 20:41:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.769 ************************************ 00:05:28.769 START TEST accel_fill 00:05:28.769 ************************************ 00:05:28.769 20:41:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.769 20:41:50 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:28.769 20:41:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:28.769 [2024-07-15 20:41:50.497365] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:28.769 [2024-07-15 20:41:50.497597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61461 ] 00:05:28.769 [2024-07-15 20:41:50.639495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.028 [2024-07-15 20:41:50.738779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.028 20:41:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.404 20:41:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.404 20:41:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:30.405 20:41:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.405 00:05:30.405 real 0m1.451s 00:05:30.405 user 0m1.257s 00:05:30.405 sys 0m0.107s 00:05:30.405 20:41:51 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.405 ************************************ 00:05:30.405 END TEST accel_fill 00:05:30.405 ************************************ 00:05:30.405 20:41:51 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:30.405 20:41:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.405 20:41:51 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:30.405 20:41:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:30.405 20:41:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.405 20:41:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.405 ************************************ 00:05:30.405 START TEST accel_copy_crc32c 00:05:30.405 ************************************ 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:30.405 20:41:51 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:30.405 [2024-07-15 20:41:52.025686] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:30.405 [2024-07-15 20:41:52.025782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:05:30.405 [2024-07-15 20:41:52.167696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.405 [2024-07-15 20:41:52.264186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.405 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.664 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.665 20:41:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.599 ************************************ 00:05:31.599 END TEST accel_copy_crc32c 00:05:31.599 ************************************ 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.599 00:05:31.599 real 0m1.457s 00:05:31.599 user 0m1.261s 00:05:31.599 sys 0m0.107s 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.599 20:41:53 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:31.599 20:41:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.599 20:41:53 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:31.599 20:41:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:31.599 20:41:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.599 20:41:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.857 ************************************ 00:05:31.857 START TEST accel_copy_crc32c_C2 00:05:31.857 ************************************ 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.857 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:31.857 [2024-07-15 20:41:53.549649] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:31.857 [2024-07-15 20:41:53.549891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61530 ] 00:05:31.857 [2024-07-15 20:41:53.684186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.118 [2024-07-15 20:41:53.779683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.118 20:41:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.055 00:05:33.055 real 0m1.443s 00:05:33.055 user 0m1.250s 00:05:33.055 sys 0m0.104s 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:33.055 ************************************ 00:05:33.055 END TEST accel_copy_crc32c_C2 00:05:33.055 ************************************ 00:05:33.055 20:41:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:33.315 20:41:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.315 20:41:55 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:33.315 20:41:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.315 20:41:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.315 20:41:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.315 ************************************ 00:05:33.315 START TEST accel_dualcast 00:05:33.315 ************************************ 00:05:33.315 20:41:55 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.315 20:41:55 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.316 20:41:55 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:33.316 20:41:55 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:33.316 [2024-07-15 20:41:55.068666] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:33.316 [2024-07-15 20:41:55.068751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61559 ] 00:05:33.316 [2024-07-15 20:41:55.209010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.574 [2024-07-15 20:41:55.300174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.574 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.575 20:41:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:34.952 20:41:56 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.952 00:05:34.952 real 0m1.439s 00:05:34.952 user 0m1.251s 00:05:34.952 sys 0m0.101s 00:05:34.952 ************************************ 00:05:34.952 END TEST accel_dualcast 00:05:34.952 20:41:56 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.952 20:41:56 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:34.952 ************************************ 00:05:34.952 20:41:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.952 20:41:56 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:34.952 20:41:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.952 20:41:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.952 20:41:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.952 ************************************ 00:05:34.952 START TEST accel_compare 00:05:34.952 ************************************ 00:05:34.952 20:41:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:34.952 20:41:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:34.952 [2024-07-15 20:41:56.569957] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:34.952 [2024-07-15 20:41:56.570048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:05:34.952 [2024-07-15 20:41:56.711969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.952 [2024-07-15 20:41:56.804238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.953 20:41:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:36.331 20:41:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.331 00:05:36.331 real 0m1.442s 00:05:36.331 user 0m1.248s 00:05:36.331 sys 0m0.107s 00:05:36.331 20:41:57 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.331 ************************************ 00:05:36.331 END TEST accel_compare 00:05:36.331 ************************************ 00:05:36.331 20:41:57 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:36.331 20:41:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.331 20:41:58 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:36.331 20:41:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.331 20:41:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.331 20:41:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.331 ************************************ 00:05:36.331 START TEST accel_xor 00:05:36.331 ************************************ 00:05:36.331 20:41:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:36.331 20:41:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:36.331 [2024-07-15 20:41:58.078713] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:36.332 [2024-07-15 20:41:58.078790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61628 ] 00:05:36.332 [2024-07-15 20:41:58.220649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.591 [2024-07-15 20:41:58.308382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.591 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.592 20:41:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.971 00:05:37.971 real 0m1.439s 00:05:37.971 user 0m1.252s 00:05:37.971 sys 0m0.098s 00:05:37.971 20:41:59 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.971 ************************************ 00:05:37.971 END TEST accel_xor 00:05:37.971 ************************************ 00:05:37.971 20:41:59 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:37.971 20:41:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.971 20:41:59 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:37.971 20:41:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:37.971 20:41:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.971 20:41:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.971 ************************************ 00:05:37.971 START TEST accel_xor 00:05:37.971 ************************************ 00:05:37.971 20:41:59 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:37.971 [2024-07-15 20:41:59.584430] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:05:37.971 [2024-07-15 20:41:59.584764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61663 ] 00:05:37.971 [2024-07-15 20:41:59.728941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.971 [2024-07-15 20:41:59.817861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.971 20:41:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.350 20:42:00 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:39.350 20:42:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.350 00:05:39.350 real 0m1.446s 00:05:39.350 user 0m1.250s 00:05:39.350 sys 0m0.106s 00:05:39.350 20:42:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.350 20:42:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 ************************************ 00:05:39.350 END TEST accel_xor 00:05:39.350 ************************************ 00:05:39.350 20:42:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.350 20:42:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:39.350 20:42:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:39.350 20:42:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.350 20:42:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 ************************************ 00:05:39.350 START TEST accel_dif_verify 00:05:39.350 ************************************ 00:05:39.350 20:42:01 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:39.350 20:42:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:39.350 [2024-07-15 20:42:01.101296] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
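The trace above is the harness handing off to the standalone accel_perf example for the dif_verify workload: build_accel_config appears to assemble a JSON accel configuration and pass it to the binary over /dev/fd/62, and the workload then runs for one second ('1 seconds') on the software module. As a minimal sketch, assuming the build path printed in the xtrace and letting accel_perf use its defaults instead of the fd-based config the harness supplies, the same run could be reproduced with:

  # 1-second software dif_verify run, matching the -t/-w arguments in this log (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

The pass condition recorded further down only asserts that a module name and an opcode came back, i.e. [[ -n software ]], [[ -n dif_verify ]], and the software == software comparison.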
00:05:39.350 [2024-07-15 20:42:01.101524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61697 ] 00:05:39.350 [2024-07-15 20:42:01.239724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.608 [2024-07-15 20:42:01.314411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.608 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.609 20:42:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.005 20:42:02 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:41.005 20:42:02 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.005 00:05:41.005 real 0m1.425s 00:05:41.005 user 0m1.239s 00:05:41.005 sys 0m0.103s 00:05:41.005 20:42:02 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.005 ************************************ 00:05:41.005 END TEST accel_dif_verify 00:05:41.005 20:42:02 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:41.005 ************************************ 00:05:41.005 20:42:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.005 20:42:02 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:41.005 20:42:02 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:41.005 20:42:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.005 20:42:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.005 ************************************ 00:05:41.005 START TEST accel_dif_generate 00:05:41.005 ************************************ 00:05:41.005 20:42:02 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:41.006 [2024-07-15 20:42:02.589065] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:41.006 [2024-07-15 20:42:02.589161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61734 ] 00:05:41.006 [2024-07-15 20:42:02.732596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.006 [2024-07-15 20:42:02.822729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.006 20:42:02 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:41.006 20:42:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:42.381 20:42:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.381 00:05:42.381 real 0m1.440s 
00:05:42.381 user 0m1.247s 00:05:42.381 sys 0m0.097s 00:05:42.381 20:42:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.381 20:42:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:42.381 ************************************ 00:05:42.381 END TEST accel_dif_generate 00:05:42.381 ************************************ 00:05:42.381 20:42:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.381 20:42:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:42.381 20:42:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.381 20:42:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.381 20:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.381 ************************************ 00:05:42.381 START TEST accel_dif_generate_copy 00:05:42.381 ************************************ 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:42.381 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:42.381 [2024-07-15 20:42:04.094217] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
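accel_dif_generate closed out just above (real 0m1.440s, user 0m1.247s, sys 0m0.097s) and the harness immediately starts accel_dif_generate_copy with the same one-second accel_perf pattern; only the -w workload name changes (judging by the names, the copy variant combines DIF generation with a data copy). A sketch of the two invocations, assuming the binary path shown in the xtrace and omitting the harness-supplied -c /dev/fd/62 descriptor:

  # dif_generate workload, as in the sub-test that just finished
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  # dif_generate_copy workload, as in the sub-test starting here
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy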
00:05:42.381 [2024-07-15 20:42:04.094291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61768 ] 00:05:42.381 [2024-07-15 20:42:04.229133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.640 [2024-07-15 20:42:04.316855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.640 20:42:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.016 00:05:44.016 real 0m1.426s 00:05:44.016 user 0m1.246s 00:05:44.016 sys 0m0.094s 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.016 ************************************ 00:05:44.016 20:42:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:44.016 END TEST accel_dif_generate_copy 00:05:44.016 ************************************ 00:05:44.016 20:42:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.016 20:42:05 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:44.016 20:42:05 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.016 20:42:05 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:44.016 20:42:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.016 20:42:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.016 ************************************ 00:05:44.016 START TEST accel_comp 00:05:44.016 ************************************ 00:05:44.016 20:42:05 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:44.016 20:42:05 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:44.016 [2024-07-15 20:42:05.592058] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:44.016 [2024-07-15 20:42:05.592127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61803 ] 00:05:44.016 [2024-07-15 20:42:05.731896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.016 [2024-07-15 20:42:05.814316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:44.016 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:44.017 20:42:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.393 20:42:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.394 ************************************ 00:05:45.394 END TEST accel_comp 00:05:45.394 ************************************ 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:45.394 20:42:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.394 00:05:45.394 real 0m1.433s 00:05:45.394 user 0m1.254s 00:05:45.394 sys 0m0.094s 00:05:45.394 20:42:06 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.394 20:42:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:45.394 20:42:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.394 20:42:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.394 20:42:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.394 20:42:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.394 20:42:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.394 ************************************ 00:05:45.394 START TEST accel_decomp 00:05:45.394 ************************************ 00:05:45.394 20:42:07 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:45.394 20:42:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:45.394 [2024-07-15 20:42:07.094060] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:45.394 [2024-07-15 20:42:07.094298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:05:45.394 [2024-07-15 20:42:07.235408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.652 [2024-07-15 20:42:07.307908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.652 20:42:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:46.589 20:42:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.589 00:05:46.589 real 0m1.425s 00:05:46.589 user 0m0.020s 00:05:46.589 sys 0m0.002s 00:05:46.589 20:42:08 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.589 20:42:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:46.589 ************************************ 00:05:46.589 END TEST accel_decomp 00:05:46.589 ************************************ 00:05:46.848 20:42:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.848 20:42:08 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:46.848 20:42:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:46.848 20:42:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.848 20:42:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.848 ************************************ 00:05:46.848 START TEST accel_decomp_full 00:05:46.848 ************************************ 00:05:46.848 20:42:08 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:46.848 20:42:08 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:46.848 [2024-07-15 20:42:08.582225] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
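(Annotation: the accel_perf command line captured just above can be replayed directly against an SPDK build. The sketch below assumes the vagrant paths from this log and drops the -c /dev/fd/62 JSON config that accel.sh pipes in, which should leave accel_perf on its software-module defaults. Judging by the '111250 bytes' buffer traced just below for this _full variant versus the '4096 bytes' of the plain decomp case above, -o 0 appears to select the whole input file as one buffer rather than 4 KiB chunks.)

# Replaying the accel_decomp_full invocation from the trace above (paths assumed
# from this log; the accel.sh-supplied -c /dev/fd/62 JSON config is omitted).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y \
    -o 0    # per the trace, this drives the full 111250-byte input instead of 4096-byte buffers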
00:05:46.848 [2024-07-15 20:42:08.582948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61872 ] 00:05:46.848 [2024-07-15 20:42:08.724886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.107 [2024-07-15 20:42:08.801086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.107 20:42:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.483 20:42:09 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.483 20:42:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.484 20:42:09 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.484 00:05:48.484 real 0m1.444s 00:05:48.484 user 0m1.251s 00:05:48.484 sys 0m0.103s 00:05:48.484 20:42:09 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.484 20:42:09 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:48.484 ************************************ 00:05:48.484 END TEST accel_decomp_full 00:05:48.484 ************************************ 00:05:48.484 20:42:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.484 20:42:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.484 20:42:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:48.484 20:42:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.484 20:42:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.484 ************************************ 00:05:48.484 START TEST accel_decomp_mcore 00:05:48.484 ************************************ 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:48.484 [2024-07-15 20:42:10.093506] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:48.484 [2024-07-15 20:42:10.093597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61901 ] 00:05:48.484 [2024-07-15 20:42:10.235270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.484 [2024-07-15 20:42:10.314103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.484 [2024-07-15 20:42:10.314146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.484 [2024-07-15 20:42:10.314325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.484 [2024-07-15 20:42:10.314327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
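(Annotation: same decompress workload, multi-core flavour. The -m 0xf switch hands SPDK a four-core mask, which matches the "-c 0xf" in the EAL parameters and the four "Reactor started on core N" notices just above. A standalone equivalent, with the same caveats about assumed paths and the omitted JSON config:)

# accel_decomp_mcore equivalent: identical decompress workload, but -m 0xf spreads
# the reactors across cores 0-3 (paths assumed from this log; JSON config omitted).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y \
    -m 0xf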
00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 
20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.484 20:42:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.904 ************************************ 00:05:49.904 END TEST accel_decomp_mcore 00:05:49.904 ************************************ 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.904 00:05:49.904 real 0m1.447s 00:05:49.904 user 0m4.545s 00:05:49.904 sys 0m0.117s 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.904 20:42:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:49.904 20:42:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.904 20:42:11 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.904 20:42:11 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:49.904 20:42:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.904 20:42:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.904 ************************************ 00:05:49.904 START TEST accel_decomp_full_mcore 00:05:49.904 ************************************ 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.904 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.905 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.905 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.905 20:42:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:05:49.905 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:49.905 [2024-07-15 20:42:11.606098] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:49.905 [2024-07-15 20:42:11.606212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:05:49.905 [2024-07-15 20:42:11.746900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.164 [2024-07-15 20:42:11.825695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.164 [2024-07-15 20:42:11.825907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.164 [2024-07-15 20:42:11.826777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.164 [2024-07-15 20:42:11.826777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.164 20:42:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 ************************************ 00:05:51.546 END TEST accel_decomp_full_mcore 00:05:51.546 ************************************ 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.546 00:05:51.546 real 0m1.449s 00:05:51.546 user 0m0.015s 00:05:51.546 sys 0m0.001s 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.546 20:42:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:51.546 20:42:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.546 20:42:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.546 20:42:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:51.546 20:42:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.546 20:42:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.546 ************************************ 00:05:51.546 START TEST accel_decomp_mthread 00:05:51.546 ************************************ 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:51.546 [2024-07-15 20:42:13.126615] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
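(Annotation: the accel_decomp_full_mcore case that just finished simply combined the -o 0 and -m 0xf switches already shown. The mthread case starting here swaps the core mask for -T 2; going by the test name and the "val=2" in the trace below, that asks accel_perf for a second worker thread. A hedged standalone form, same assumptions as before:)

# accel_decomp_mthread equivalent: single core, -T 2 for two threads (inferred from
# the test name and the val=2 trace). Paths assumed; accel.sh's JSON config omitted.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y \
    -T 2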
00:05:51.546 [2024-07-15 20:42:13.126699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61976 ] 00:05:51.546 [2024-07-15 20:42:13.268747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.546 [2024-07-15 20:42:13.358737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.546 20:42:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 ************************************ 00:05:52.922 END TEST accel_decomp_mthread 00:05:52.922 ************************************ 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.922 00:05:52.922 real 0m1.447s 00:05:52.922 user 0m0.018s 00:05:52.922 sys 0m0.004s 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.922 20:42:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:52.922 20:42:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.922 20:42:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:52.922 20:42:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:52.922 20:42:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.922 20:42:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.922 ************************************ 00:05:52.922 START 
TEST accel_decomp_full_mthread 00:05:52.922 ************************************ 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:52.922 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:52.922 [2024-07-15 20:42:14.635834] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
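(Annotation: the last variant in this block, accel_decomp_full_mthread, stacks the two options: -o 0 for the full 111250-byte buffer traced below and -T 2 for the second thread. Standalone sketch, same assumptions about paths and the omitted -c /dev/fd/62 config:)

# accel_decomp_full_mthread equivalent: full-size buffer plus two threads
# (paths assumed from this log; accel.sh's JSON config omitted).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
    -y \
    -o 0 -T 2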
00:05:52.922 [2024-07-15 20:42:14.636045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62013 ] 00:05:52.922 [2024-07-15 20:42:14.777717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.182 [2024-07-15 20:42:14.864279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.182 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.183 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.183 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.183 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.183 20:42:14 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.183 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.183 20:42:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.563 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.564 ************************************ 00:05:54.564 END TEST accel_decomp_full_mthread 00:05:54.564 ************************************ 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.564 00:05:54.564 real 0m1.465s 00:05:54.564 user 0m1.268s 00:05:54.564 sys 0m0.106s 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.564 20:42:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
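
The accel_decomp_full_mthread case above drives software decompression through the accel_perf example with output verification and the multi-thread option (-T 2). A standalone sketch of the traced invocation, with the checkout path and test vector taken from the trace and the -c JSON config dropped because build_accel_config produced an empty one here:

  # Sketch only: flags copied from the accel_perf trace above
  # (-t 1: run for one second, -w decompress: workload, -y: verify the output,
  #  -T 2: the multi-thread variant; -o and -l exactly as traced).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # checkout location used by this job
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2
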
00:05:54.564 20:42:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.564 20:42:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:54.564 20:42:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:54.564 20:42:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:54.564 20:42:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.564 20:42:16 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:54.564 20:42:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.564 20:42:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.564 20:42:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.564 20:42:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.564 20:42:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.564 20:42:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.564 20:42:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:54.564 20:42:16 accel -- accel/accel.sh@41 -- # jq -r . 00:05:54.564 ************************************ 00:05:54.564 START TEST accel_dif_functional_tests 00:05:54.564 ************************************ 00:05:54.564 20:42:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:54.564 [2024-07-15 20:42:16.190365] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:54.564 [2024-07-15 20:42:16.190431] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62049 ] 00:05:54.564 [2024-07-15 20:42:16.330335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.564 [2024-07-15 20:42:16.419615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.564 [2024-07-15 20:42:16.419805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.564 [2024-07-15 20:42:16.419807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.564 [2024-07-15 20:42:16.460903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.824 00:05:54.824 00:05:54.824 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.824 http://cunit.sourceforge.net/ 00:05:54.824 00:05:54.824 00:05:54.824 Suite: accel_dif 00:05:54.824 Test: verify: DIF generated, GUARD check ...passed 00:05:54.824 Test: verify: DIF generated, APPTAG check ...passed 00:05:54.824 Test: verify: DIF generated, REFTAG check ...passed 00:05:54.824 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:42:16.489984] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:54.824 passed 00:05:54.824 Test: verify: DIF not generated, APPTAG check ...passed 00:05:54.824 Test: verify: DIF not generated, REFTAG check ...passed[2024-07-15 20:42:16.490225] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:54.824 [2024-07-15 20:42:16.490323] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:54.824 00:05:54.824 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:54.824 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:54.824 Test: verify: APPTAG incorrect, no 
APPTAG check ...[2024-07-15 20:42:16.490541] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 passed 00:05:54.824 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:54.824 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:54.824 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:42:16.490867] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 passed 00:05:54.824 Test: verify copy: DIF generated, GUARD check ...passed 00:05:54.824 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:54.824 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:54.824 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:42:16.491274] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 passed 00:05:54.824 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:42:16.491412] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a passed 00:05:54.824 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:42:16.491541] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a passed 00:05:54.824 Test: generate copy: DIF generated, GUARD check ...passed 00:05:54.824 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:54.824 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:54.824 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:54.824 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:54.824 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:54.824 Test: generate copy: iovecs-len validate ...[2024-07-15 20:42:16.492188] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. passed 00:05:54.824 Test: generate copy: buffer alignment validate ...
00:05:54.824 passed 00:05:54.824 00:05:54.824 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.824 suites 1 1 n/a 0 0 00:05:54.824 tests 26 26 26 0 0 00:05:54.824 asserts 115 115 115 0 n/a 00:05:54.824 00:05:54.824 Elapsed time = 0.006 seconds 00:05:54.824 00:05:54.824 real 0m0.523s 00:05:54.824 user 0m0.642s 00:05:54.824 sys 0m0.140s 00:05:54.824 20:42:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.824 20:42:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:54.824 ************************************ 00:05:54.824 END TEST accel_dif_functional_tests 00:05:54.824 ************************************ 00:05:54.824 20:42:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.824 00:05:54.824 real 0m33.328s 00:05:54.824 user 0m34.647s 00:05:54.824 sys 0m4.044s 00:05:54.824 ************************************ 00:05:54.824 END TEST accel 00:05:54.824 ************************************ 00:05:54.824 20:42:16 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.824 20:42:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.084 20:42:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.084 20:42:16 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:55.084 20:42:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.084 20:42:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.084 20:42:16 -- common/autotest_common.sh@10 -- # set +x 00:05:55.084 ************************************ 00:05:55.084 START TEST accel_rpc 00:05:55.084 ************************************ 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:55.084 * Looking for test storage... 00:05:55.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:55.084 20:42:16 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.084 20:42:16 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62119 00:05:55.084 20:42:16 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:55.084 20:42:16 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62119 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62119 ']' 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.084 20:42:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.084 [2024-07-15 20:42:16.969459] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
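
accel_rpc.sh above has just launched spdk_tgt with --wait-for-rpc and is blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A rough, hypothetical stand-in for that wait (not the real helper from common/autotest_common.sh) could be:

  # Sketch: poll the default RPC socket until any RPC succeeds, ~10 s timeout.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # checkout location used by this job
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
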
00:05:55.084 [2024-07-15 20:42:16.969537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62119 ] 00:05:55.344 [2024-07-15 20:42:17.111068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.344 [2024-07-15 20:42:17.195418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.912 20:42:17 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.912 20:42:17 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.912 20:42:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:55.912 20:42:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:55.912 20:42:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:55.912 20:42:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:55.912 20:42:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:55.912 20:42:17 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.912 20:42:17 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.912 20:42:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.171 ************************************ 00:05:56.171 START TEST accel_assign_opcode 00:05:56.171 ************************************ 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.171 [2024-07-15 20:42:17.835068] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.171 [2024-07-15 20:42:17.847043] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.171 20:42:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.171 [2024-07-15 20:42:17.895697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:56.171 20:42:18 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.171 software 00:05:56.171 00:05:56.171 real 0m0.239s 00:05:56.171 user 0m0.054s 00:05:56.171 sys 0m0.011s 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.171 20:42:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.171 ************************************ 00:05:56.171 END TEST accel_assign_opcode 00:05:56.171 ************************************ 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.431 20:42:18 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62119 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62119 ']' 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62119 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62119 00:05:56.431 killing process with pid 62119 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62119' 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@967 -- # kill 62119 00:05:56.431 20:42:18 accel_rpc -- common/autotest_common.sh@972 -- # wait 62119 00:05:56.690 00:05:56.690 real 0m1.688s 00:05:56.690 user 0m1.695s 00:05:56.690 sys 0m0.439s 00:05:56.690 20:42:18 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.690 ************************************ 00:05:56.690 END TEST accel_rpc 00:05:56.690 ************************************ 00:05:56.690 20:42:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.690 20:42:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.690 20:42:18 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:56.690 20:42:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.690 20:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.690 20:42:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.690 ************************************ 00:05:56.690 START TEST app_cmdline 00:05:56.690 ************************************ 00:05:56.690 20:42:18 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:56.949 * Looking for test storage... 
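
The accel_assign_opcode suite that just finished pins the copy opcode to a module by RPC before the framework initializes: copy is first assigned to a bogus module name, then to software, the framework is started, and the assignment is read back. Condensed to the bare RPC sequence (a sketch against a target started with --wait-for-rpc, default socket assumed):

  # Sketch of the RPC sequence traced above.
  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC accel_assign_opc -o copy -m incorrect    # bogus module name, later overridden
  $RPC accel_assign_opc -o copy -m software     # the assignment that should stick
  $RPC framework_start_init                     # leave the --wait-for-rpc startup state
  $RPC accel_get_opc_assignments | jq -r .copy  # expected to print: software
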
00:05:56.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:56.949 20:42:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:56.949 20:42:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62201 00:05:56.949 20:42:18 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:56.949 20:42:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62201 00:05:56.949 20:42:18 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62201 ']' 00:05:56.949 20:42:18 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.949 20:42:18 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.949 20:42:18 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.949 20:42:18 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.949 20:42:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.949 [2024-07-15 20:42:18.726541] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:05:56.949 [2024-07-15 20:42:18.726608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62201 ] 00:05:57.208 [2024-07-15 20:42:18.866392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.208 [2024-07-15 20:42:18.947744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.208 [2024-07-15 20:42:18.988606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.775 20:42:19 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.775 20:42:19 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:57.775 20:42:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:58.034 { 00:05:58.034 "version": "SPDK v24.09-pre git sha1 20d0fd684", 00:05:58.034 "fields": { 00:05:58.034 "major": 24, 00:05:58.034 "minor": 9, 00:05:58.034 "patch": 0, 00:05:58.034 "suffix": "-pre", 00:05:58.034 "commit": "20d0fd684" 00:05:58.034 } 00:05:58.034 } 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:58.034 20:42:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:58.034 20:42:19 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.294 request: 00:05:58.294 { 00:05:58.294 "method": "env_dpdk_get_mem_stats", 00:05:58.294 "req_id": 1 00:05:58.294 } 00:05:58.294 Got JSON-RPC error response 00:05:58.294 response: 00:05:58.294 { 00:05:58.294 "code": -32601, 00:05:58.294 "message": "Method not found" 00:05:58.294 } 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.294 20:42:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62201 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62201 ']' 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62201 00:05:58.294 20:42:19 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62201 00:05:58.294 killing process with pid 62201 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62201' 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@967 -- # kill 62201 00:05:58.294 20:42:20 app_cmdline -- common/autotest_common.sh@972 -- # wait 62201 00:05:58.553 00:05:58.553 real 0m1.805s 00:05:58.553 user 0m2.091s 00:05:58.553 sys 0m0.450s 00:05:58.553 ************************************ 00:05:58.553 END TEST app_cmdline 00:05:58.553 ************************************ 00:05:58.553 20:42:20 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.553 20:42:20 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:05:58.553 20:42:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.553 20:42:20 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.553 20:42:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.553 20:42:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.553 20:42:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.553 ************************************ 00:05:58.553 START TEST version 00:05:58.553 ************************************ 00:05:58.553 20:42:20 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:58.812 * Looking for test storage... 00:05:58.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:58.812 20:42:20 version -- app/version.sh@17 -- # get_header_version major 00:05:58.812 20:42:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # cut -f2 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.812 20:42:20 version -- app/version.sh@17 -- # major=24 00:05:58.812 20:42:20 version -- app/version.sh@18 -- # get_header_version minor 00:05:58.812 20:42:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # cut -f2 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.812 20:42:20 version -- app/version.sh@18 -- # minor=9 00:05:58.812 20:42:20 version -- app/version.sh@19 -- # get_header_version patch 00:05:58.812 20:42:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # cut -f2 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.812 20:42:20 version -- app/version.sh@19 -- # patch=0 00:05:58.812 20:42:20 version -- app/version.sh@20 -- # get_header_version suffix 00:05:58.812 20:42:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.812 20:42:20 version -- app/version.sh@14 -- # cut -f2 00:05:58.812 20:42:20 version -- app/version.sh@20 -- # suffix=-pre 00:05:58.812 20:42:20 version -- app/version.sh@22 -- # version=24.9 00:05:58.812 20:42:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.812 20:42:20 version -- app/version.sh@28 -- # version=24.9rc0 00:05:58.812 20:42:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:58.812 20:42:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:58.812 20:42:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:58.812 20:42:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:58.812 00:05:58.812 real 0m0.222s 00:05:58.812 user 0m0.125s 00:05:58.812 sys 0m0.146s 00:05:58.812 ************************************ 00:05:58.812 END TEST version 00:05:58.812 ************************************ 00:05:58.812 20:42:20 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.812 20:42:20 version -- common/autotest_common.sh@10 -- # set +x 00:05:58.812 20:42:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.812 20:42:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:58.812 20:42:20 -- spdk/autotest.sh@198 -- # uname -s 00:05:58.812 20:42:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:58.812 20:42:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:58.812 20:42:20 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:05:58.812 20:42:20 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:05:58.812 20:42:20 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:58.812 20:42:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.812 20:42:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.812 20:42:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.812 ************************************ 00:05:58.812 START TEST spdk_dd 00:05:58.812 ************************************ 00:05:58.812 20:42:20 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:59.072 * Looking for test storage... 00:05:59.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:59.072 20:42:20 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.072 20:42:20 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.072 20:42:20 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.072 20:42:20 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.072 20:42:20 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.072 20:42:20 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.072 20:42:20 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.072 20:42:20 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:59.072 20:42:20 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.072 20:42:20 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.641 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.641 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.641 20:42:21 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:59.641 20:42:21 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@230 -- # local class 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@232 -- # local progif 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@233 -- # class=01 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:05:59.641 20:42:21 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:05:59.641 20:42:21 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:59.641 20:42:21 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@139 -- # local lib so 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.641 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
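
nvme_in_userspace above builds the NVMe BDF list for dd.sh by matching PCI class 01 / subclass 08 / prog-if 02 in machine-readable lspci output and dropping devices already in use; here it settles on 0000:00:10.0 and 0000:00:11.0. The lspci half of that check, lifted from the trace as a standalone sketch (the in-use filtering from scripts/common.sh is skipped):

  # Sketch: print the BDFs of NVMe-class PCI functions, as traced above.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
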
00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:59.642 20:42:21 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:59.642 
20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:59.642 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:59.902 * spdk_dd linked to liburing 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:59.902 20:42:21 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
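
check_liburing above decides whether spdk_dd was built against liburing by asking the dynamic loader which shared objects the binary pulls in and testing each name against liburing.so.*; the '* spdk_dd linked to liburing' line is that loop matching liburing.so.2. The probe boils down to something like this sketch:

  # Sketch: list the objects the loader would map for spdk_dd and look for liburing.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path from the trace
  if LD_TRACE_LOADED_OBJECTS=1 "$SPDK_DD" | awk '{print $1}' | grep -q '^liburing\.so\.'; then
      echo '* spdk_dd linked to liburing'
  fi
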
00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:59.902 20:42:21 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:59.903 20:42:21 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:59.903 20:42:21 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:59.903 20:42:21 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:05:59.903 20:42:21 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:05:59.903 20:42:21 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:05:59.903 20:42:21 spdk_dd -- dd/common.sh@157 -- # return 0 00:05:59.903 20:42:21 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:59.903 20:42:21 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:59.903 20:42:21 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:59.903 20:42:21 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.903 20:42:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:59.903 ************************************ 00:05:59.903 START TEST spdk_dd_basic_rw 00:05:59.903 ************************************ 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:59.903 * Looking for test storage... 
00:05:59.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:59.903 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:00.166 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:00.166 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:00.166 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:00.166 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:00.166 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:00.166 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.167 ************************************ 00:06:00.167 START TEST dd_bs_lt_native_bs 00:06:00.167 ************************************ 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:00.167 20:42:21 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.167 { 00:06:00.167 "subsystems": [ 00:06:00.167 { 00:06:00.167 "subsystem": "bdev", 00:06:00.167 "config": [ 00:06:00.167 { 00:06:00.167 "params": { 00:06:00.167 "trtype": "pcie", 00:06:00.167 "traddr": "0000:00:10.0", 00:06:00.167 "name": "Nvme0" 00:06:00.167 }, 00:06:00.167 "method": "bdev_nvme_attach_controller" 00:06:00.167 }, 00:06:00.167 { 00:06:00.167 "method": "bdev_wait_for_examine" 00:06:00.167 } 00:06:00.167 ] 00:06:00.167 } 00:06:00.167 ] 00:06:00.167 } 00:06:00.167 [2024-07-15 20:42:22.010915] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
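The identify dump and the bash regex matches above are how test/dd/common.sh derives the namespace's native block size before the dd_bs_lt_native_bs case runs: the "Current LBA Format" line selects format #04, whose "Data Size" is 4096 bytes, and that value becomes native_bs. Below is a minimal standalone sketch of the same extraction using sed instead of the script's bash regexes; the binary path and PCIe address are taken from this trace, while the variable names and the sed/head plumbing are introduced here for illustration and are not the real get_native_nvme_bs helper.

# Sketch: derive the native block size the same way the trace above does.
SPDK=/home/vagrant/spdk_repo/spdk                    # path as seen in this log
pci=0000:00:10.0
id_out=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")
# "Current LBA Format: LBA Format #04"  ->  04
lbaf=$(printf '%s\n' "$id_out" |
  sed -n 's/.*Current LBA Format: *LBA Format #\([0-9][0-9]*\).*/\1/p' | head -n1)
# "LBA Format #04: Data Size: 4096 ..."  ->  4096
native_bs=$(printf '%s\n' "$id_out" |
  sed -n "s/.*LBA Format #${lbaf}: Data Size: *\([0-9][0-9]*\).*/\1/p" | head -n1)
echo "native block size: ${native_bs:-unknown}"      # 4096 for this QEMU namespace

With native_bs known, basic_rw.sh wraps spdk_dd in the NOT helper and asks for --bs=2048; the case only counts as a pass because spdk_dd aborts with the "--bs value cannot be less than ... native block size" error recorded a few lines below.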
00:06:00.167 [2024-07-15 20:42:22.011032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62531 ] 00:06:00.426 [2024-07-15 20:42:22.157217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.426 [2024-07-15 20:42:22.240018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.426 [2024-07-15 20:42:22.282399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.685 [2024-07-15 20:42:22.380571] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:00.685 [2024-07-15 20:42:22.380636] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.685 [2024-07-15 20:42:22.478538] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.685 ************************************ 00:06:00.685 END TEST dd_bs_lt_native_bs 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.685 00:06:00.685 real 0m0.621s 00:06:00.685 user 0m0.395s 00:06:00.685 sys 0m0.174s 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.685 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:00.685 ************************************ 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.945 ************************************ 00:06:00.945 START TEST dd_rw 00:06:00.945 ************************************ 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:00.945 20:42:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.514 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:01.514 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:01.514 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.515 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.515 [2024-07-15 20:42:23.183277] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:01.515 [2024-07-15 20:42:23.183669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62563 ] 00:06:01.515 { 00:06:01.515 "subsystems": [ 00:06:01.515 { 00:06:01.515 "subsystem": "bdev", 00:06:01.515 "config": [ 00:06:01.515 { 00:06:01.515 "params": { 00:06:01.515 "trtype": "pcie", 00:06:01.515 "traddr": "0000:00:10.0", 00:06:01.515 "name": "Nvme0" 00:06:01.515 }, 00:06:01.515 "method": "bdev_nvme_attach_controller" 00:06:01.515 }, 00:06:01.515 { 00:06:01.515 "method": "bdev_wait_for_examine" 00:06:01.515 } 00:06:01.515 ] 00:06:01.515 } 00:06:01.515 ] 00:06:01.515 } 00:06:01.515 [2024-07-15 20:42:23.323298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.515 [2024-07-15 20:42:23.401567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.773 [2024-07-15 20:42:23.443257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.032  Copying: 60/60 [kB] (average 19 MBps) 00:06:02.032 00:06:02.032 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:02.032 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:02.032 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.032 20:42:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.032 [2024-07-15 20:42:23.779861] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
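The dd_rw test that starts in the trace above builds its matrix entirely in the shell: bss collects native_bs shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes) and qds holds the two queue depths 1 and 64. The first combination uses count=15, so the transfer is 15 x 4096 = 61440 bytes, exactly the "60/60 [kB]" reported by the progress line above. A short sketch of that bookkeeping, restricted to the block counts actually visible in this excerpt (15 blocks at 4096, 7 at 8192); variable names other than bss, qds and native_bs are introduced here for clarity.

# Sketch: parameter matrix and transfer sizes implied by the dd_rw xtrace above.
native_bs=4096
bss=()
for shift_amt in 0 1 2; do
  bss+=($(( native_bs << shift_amt )))             # 4096 8192 16384
done
qds=(1 64)
# block counts visible in this excerpt
for spec in "4096 15" "8192 7"; do
  read -r bs count <<< "$spec"
  size=$(( bs * count ))
  for qd in "${qds[@]}"; do
    printf 'bs=%-5d qd=%-2d count=%-2d -> %d bytes (%d kB)\n' \
      "$bs" "$qd" "$count" "$size" "$(( size / 1024 ))"
  done
done
# 15*4096 = 61440 B = 60 kB -> "Copying: 60/60 [kB]"
#  7*8192 = 57344 B = 56 kB -> "Copying: 56/56 [kB]"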
00:06:02.032 [2024-07-15 20:42:23.779934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62571 ] 00:06:02.032 { 00:06:02.032 "subsystems": [ 00:06:02.032 { 00:06:02.032 "subsystem": "bdev", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "params": { 00:06:02.032 "trtype": "pcie", 00:06:02.032 "traddr": "0000:00:10.0", 00:06:02.032 "name": "Nvme0" 00:06:02.032 }, 00:06:02.032 "method": "bdev_nvme_attach_controller" 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "bdev_wait_for_examine" 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 } 00:06:02.032 [2024-07-15 20:42:23.920782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.290 [2024-07-15 20:42:23.997927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.290 [2024-07-15 20:42:24.039771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.548  Copying: 60/60 [kB] (average 19 MBps) 00:06:02.548 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.548 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.548 [2024-07-15 20:42:24.375719] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
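Every spdk_dd invocation in this trace receives the same minimal bdev configuration through --json: a bdev_nvme_attach_controller call that attaches the PCIe controller at 0000:00:10.0 under the name Nvme0, followed by bdev_wait_for_examine so the Nvme0n1 bdev exists before the copy starts. In the log the JSON is produced by the gen_conf helper and handed over as /dev/fd/61; the sketch below writes equivalent content to a temporary file instead, and the $cfg variable is introduced here (it is reused by the sketches further down), not taken from the scripts.

# Sketch: file-based equivalent of the bdev config printed before each run above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# consumed as:  spdk_dd --if=... --ob=Nvme0n1 --bs=... --json "$cfg"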
00:06:02.548 [2024-07-15 20:42:24.375891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62592 ] 00:06:02.548 { 00:06:02.548 "subsystems": [ 00:06:02.548 { 00:06:02.548 "subsystem": "bdev", 00:06:02.548 "config": [ 00:06:02.548 { 00:06:02.548 "params": { 00:06:02.548 "trtype": "pcie", 00:06:02.548 "traddr": "0000:00:10.0", 00:06:02.548 "name": "Nvme0" 00:06:02.548 }, 00:06:02.548 "method": "bdev_nvme_attach_controller" 00:06:02.548 }, 00:06:02.548 { 00:06:02.548 "method": "bdev_wait_for_examine" 00:06:02.548 } 00:06:02.548 ] 00:06:02.548 } 00:06:02.548 ] 00:06:02.548 } 00:06:02.807 [2024-07-15 20:42:24.516859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.807 [2024-07-15 20:42:24.601638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.807 [2024-07-15 20:42:24.643277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.067  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:03.067 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:03.067 20:42:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.635 20:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:03.635 20:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:03.635 20:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.635 20:42:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.635 [2024-07-15 20:42:25.461720] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
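Each (bs, qd) combination in this log goes through the same three steps: spdk_dd writes a pre-generated data file into the Nvme0n1 bdev (--if/--ob), reads the same number of blocks back out into a second file (--ib/--of), and diff -q confirms the two files match before the namespace is cleared for the next case. A condensed sketch of one such round follows; it reuses the $cfg file from the sketch above, substitutes a plain dd of /dev/urandom for SPDK's gen_bytes helper, and keeps its scratch files under /tmp rather than the test/dd/dd.dump0 and dd.dump1 paths used in the trace.

# Sketch of one write / read-back / verify round, as seen repeatedly above.
SPDK=/home/vagrant/spdk_repo/spdk
dump0=/tmp/dd.dump0
dump1=/tmp/dd.dump1
bs=4096; qd=1; count=15
dd if=/dev/urandom of="$dump0" bs="$bs" count="$count" status=none   # stand-in for gen_bytes
"$SPDK/build/bin/spdk_dd" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$cfg"
"$SPDK/build/bin/spdk_dd" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json "$cfg"
diff -q "$dump0" "$dump1" && echo "bs=$bs qd=$qd: data verified"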
00:06:03.635 [2024-07-15 20:42:25.461954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62611 ] 00:06:03.635 { 00:06:03.635 "subsystems": [ 00:06:03.635 { 00:06:03.635 "subsystem": "bdev", 00:06:03.635 "config": [ 00:06:03.635 { 00:06:03.635 "params": { 00:06:03.635 "trtype": "pcie", 00:06:03.635 "traddr": "0000:00:10.0", 00:06:03.635 "name": "Nvme0" 00:06:03.635 }, 00:06:03.635 "method": "bdev_nvme_attach_controller" 00:06:03.635 }, 00:06:03.635 { 00:06:03.635 "method": "bdev_wait_for_examine" 00:06:03.635 } 00:06:03.635 ] 00:06:03.635 } 00:06:03.635 ] 00:06:03.635 } 00:06:03.894 [2024-07-15 20:42:25.601904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.894 [2024-07-15 20:42:25.697163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.894 [2024-07-15 20:42:25.738526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.152  Copying: 60/60 [kB] (average 58 MBps) 00:06:04.152 00:06:04.152 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:04.152 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.152 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.152 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.410 [2024-07-15 20:42:26.068907] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
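The only difference between this round and the previous one is the queue depth: --qd=64 lets spdk_dd keep up to 64 I/Os in flight against the bdev instead of one. For the same 60 kB write the progress lines in this log report an average of roughly 19 MBps at qd=1 and 58 MBps at qd=64 (the transfers are tiny, so these averages are noisy and only indicative). A sketch of sweeping the queue depth for a fixed block size, reusing the $cfg and $dump0 names introduced in the sketches above:

# Sketch: rerun the same write at each queue depth; spdk_dd prints the average rate itself.
bs=4096; count=15
for qd in 1 64; do
  echo "== bs=$bs qd=$qd =="
  "$SPDK/build/bin/spdk_dd" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$cfg"
done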
00:06:04.410 [2024-07-15 20:42:26.068972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62625 ] 00:06:04.410 { 00:06:04.410 "subsystems": [ 00:06:04.410 { 00:06:04.410 "subsystem": "bdev", 00:06:04.410 "config": [ 00:06:04.410 { 00:06:04.410 "params": { 00:06:04.410 "trtype": "pcie", 00:06:04.410 "traddr": "0000:00:10.0", 00:06:04.410 "name": "Nvme0" 00:06:04.410 }, 00:06:04.410 "method": "bdev_nvme_attach_controller" 00:06:04.410 }, 00:06:04.410 { 00:06:04.410 "method": "bdev_wait_for_examine" 00:06:04.410 } 00:06:04.410 ] 00:06:04.410 } 00:06:04.410 ] 00:06:04.410 } 00:06:04.410 [2024-07-15 20:42:26.209685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.410 [2024-07-15 20:42:26.303819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.668 [2024-07-15 20:42:26.345063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.926  Copying: 60/60 [kB] (average 29 MBps) 00:06:04.926 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.926 20:42:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.926 [2024-07-15 20:42:26.677200] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
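The spdk_dd call just issued above, with /dev/zero as input, bs=1048576 and count=1, is the cleanup step: after each verified round the clear_nvme helper from test/dd/common.sh overwrites the start of the bdev with a megabyte of zeroes (the "Copying: 1024/1024 [kB]" lines) so the next combination starts from known content. A one-line sketch of the same wipe under the assumptions already used above ($cfg, $SPDK); the real helper does more bookkeeping than this.

# Sketch: zero the first 1 MiB of the bdev between rounds, mirroring the trace's clear_nvme call.
"$SPDK/build/bin/spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$cfg" &&
  echo "Nvme0n1: first 1 MiB zeroed"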
00:06:04.926 [2024-07-15 20:42:26.677264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62640 ] 00:06:04.926 { 00:06:04.926 "subsystems": [ 00:06:04.926 { 00:06:04.926 "subsystem": "bdev", 00:06:04.926 "config": [ 00:06:04.926 { 00:06:04.926 "params": { 00:06:04.926 "trtype": "pcie", 00:06:04.926 "traddr": "0000:00:10.0", 00:06:04.926 "name": "Nvme0" 00:06:04.926 }, 00:06:04.926 "method": "bdev_nvme_attach_controller" 00:06:04.926 }, 00:06:04.926 { 00:06:04.926 "method": "bdev_wait_for_examine" 00:06:04.926 } 00:06:04.926 ] 00:06:04.926 } 00:06:04.926 ] 00:06:04.926 } 00:06:04.926 [2024-07-15 20:42:26.817953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.184 [2024-07-15 20:42:26.903393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.184 [2024-07-15 20:42:26.944684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.442  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:05.442 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:05.442 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.022 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:06.022 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:06.022 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.022 20:42:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.022 [2024-07-15 20:42:27.731555] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
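Because every copy in this test prints a "Copying: X/Y [kB] (average N MBps)" summary, the per-combination throughput can be tabulated straight from a saved copy of this console output. The sketch below assumes the output has been saved to a file named build.log (a placeholder introduced here); it relies only on the line format visible above.

# Sketch: tabulate the throughput summaries from a saved copy of this console log.
grep -o 'Copying: [0-9]*/[0-9]* \[kB\] (average [0-9]* MBps)' build.log |
  awk '{ gsub(/[()]/, ""); printf "%-10s %s %s\n", $2, $5, $6 }'
# e.g. "60/60      19 MBps" for the bs=4096, qd=1 write shown earlier.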
00:06:06.022 [2024-07-15 20:42:27.731626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62659 ] 00:06:06.022 { 00:06:06.022 "subsystems": [ 00:06:06.022 { 00:06:06.022 "subsystem": "bdev", 00:06:06.022 "config": [ 00:06:06.022 { 00:06:06.022 "params": { 00:06:06.022 "trtype": "pcie", 00:06:06.022 "traddr": "0000:00:10.0", 00:06:06.022 "name": "Nvme0" 00:06:06.022 }, 00:06:06.022 "method": "bdev_nvme_attach_controller" 00:06:06.022 }, 00:06:06.022 { 00:06:06.022 "method": "bdev_wait_for_examine" 00:06:06.022 } 00:06:06.022 ] 00:06:06.022 } 00:06:06.022 ] 00:06:06.022 } 00:06:06.022 [2024-07-15 20:42:27.872712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.280 [2024-07-15 20:42:27.943853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.280 [2024-07-15 20:42:27.985616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.538  Copying: 56/56 [kB] (average 27 MBps) 00:06:06.538 00:06:06.538 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:06.538 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:06.538 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.538 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.538 [2024-07-15 20:42:28.299740] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
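The "Default socket implementaion override: uring" notice printed before every copy above ties back to the library scan at the very top of this section: dd/common.sh walks spdk_dd's shared-object list (ldd-style "lib => path" lines, hence the read -r lib _ so _ in the trace) until it hits liburing.so.2, prints "spdk_dd linked to liburing" and sets liburing_in_use=1, which the SPDK_TEST_URING gate in dd.sh requires. A simplified stand-alone check with plain ldd, not the script's read loop:

# Sketch: confirm the spdk_dd binary from this trace is linked against liburing.
if ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep -q 'liburing\.so'; then
  echo '* spdk_dd linked to liburing'
else
  echo '* spdk_dd not linked to liburing' >&2
fi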
00:06:06.538 [2024-07-15 20:42:28.299803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:06:06.538 { 00:06:06.538 "subsystems": [ 00:06:06.538 { 00:06:06.538 "subsystem": "bdev", 00:06:06.538 "config": [ 00:06:06.538 { 00:06:06.538 "params": { 00:06:06.538 "trtype": "pcie", 00:06:06.538 "traddr": "0000:00:10.0", 00:06:06.538 "name": "Nvme0" 00:06:06.538 }, 00:06:06.538 "method": "bdev_nvme_attach_controller" 00:06:06.538 }, 00:06:06.538 { 00:06:06.538 "method": "bdev_wait_for_examine" 00:06:06.538 } 00:06:06.538 ] 00:06:06.538 } 00:06:06.538 ] 00:06:06.538 } 00:06:06.538 [2024-07-15 20:42:28.439270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.797 [2024-07-15 20:42:28.521480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.797 [2024-07-15 20:42:28.563157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.055  Copying: 56/56 [kB] (average 27 MBps) 00:06:07.055 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.055 20:42:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.055 [2024-07-15 20:42:28.897717] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:07.055 [2024-07-15 20:42:28.897789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62688 ] 00:06:07.055 { 00:06:07.055 "subsystems": [ 00:06:07.055 { 00:06:07.055 "subsystem": "bdev", 00:06:07.055 "config": [ 00:06:07.055 { 00:06:07.055 "params": { 00:06:07.055 "trtype": "pcie", 00:06:07.055 "traddr": "0000:00:10.0", 00:06:07.055 "name": "Nvme0" 00:06:07.055 }, 00:06:07.055 "method": "bdev_nvme_attach_controller" 00:06:07.055 }, 00:06:07.055 { 00:06:07.055 "method": "bdev_wait_for_examine" 00:06:07.055 } 00:06:07.055 ] 00:06:07.055 } 00:06:07.055 ] 00:06:07.055 } 00:06:07.314 [2024-07-15 20:42:29.037504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.314 [2024-07-15 20:42:29.135120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.314 [2024-07-15 20:42:29.177044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.573  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:07.573 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:07.573 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.142 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:08.142 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:08.142 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.142 20:42:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.142 [2024-07-15 20:42:29.955773] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:08.142 [2024-07-15 20:42:29.955849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62707 ] 00:06:08.142 { 00:06:08.142 "subsystems": [ 00:06:08.142 { 00:06:08.142 "subsystem": "bdev", 00:06:08.142 "config": [ 00:06:08.142 { 00:06:08.142 "params": { 00:06:08.142 "trtype": "pcie", 00:06:08.142 "traddr": "0000:00:10.0", 00:06:08.142 "name": "Nvme0" 00:06:08.142 }, 00:06:08.142 "method": "bdev_nvme_attach_controller" 00:06:08.142 }, 00:06:08.142 { 00:06:08.142 "method": "bdev_wait_for_examine" 00:06:08.142 } 00:06:08.142 ] 00:06:08.142 } 00:06:08.142 ] 00:06:08.142 } 00:06:08.400 [2024-07-15 20:42:30.094744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.400 [2024-07-15 20:42:30.176390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.400 [2024-07-15 20:42:30.217510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.659  Copying: 56/56 [kB] (average 54 MBps) 00:06:08.659 00:06:08.659 20:42:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:08.659 20:42:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:08.659 20:42:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.659 20:42:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.659 [2024-07-15 20:42:30.541713] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:08.659 [2024-07-15 20:42:30.541776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62726 ] 00:06:08.659 { 00:06:08.659 "subsystems": [ 00:06:08.659 { 00:06:08.659 "subsystem": "bdev", 00:06:08.659 "config": [ 00:06:08.659 { 00:06:08.659 "params": { 00:06:08.659 "trtype": "pcie", 00:06:08.659 "traddr": "0000:00:10.0", 00:06:08.659 "name": "Nvme0" 00:06:08.659 }, 00:06:08.659 "method": "bdev_nvme_attach_controller" 00:06:08.659 }, 00:06:08.659 { 00:06:08.659 "method": "bdev_wait_for_examine" 00:06:08.659 } 00:06:08.659 ] 00:06:08.659 } 00:06:08.659 ] 00:06:08.659 } 00:06:08.918 [2024-07-15 20:42:30.682453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.918 [2024-07-15 20:42:30.767382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.918 [2024-07-15 20:42:30.808768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.177  Copying: 56/56 [kB] (average 54 MBps) 00:06:09.177 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.436 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.436 [2024-07-15 20:42:31.141029] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:09.436 [2024-07-15 20:42:31.141092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62736 ] 00:06:09.436 { 00:06:09.436 "subsystems": [ 00:06:09.436 { 00:06:09.436 "subsystem": "bdev", 00:06:09.436 "config": [ 00:06:09.436 { 00:06:09.436 "params": { 00:06:09.436 "trtype": "pcie", 00:06:09.436 "traddr": "0000:00:10.0", 00:06:09.436 "name": "Nvme0" 00:06:09.436 }, 00:06:09.436 "method": "bdev_nvme_attach_controller" 00:06:09.436 }, 00:06:09.436 { 00:06:09.436 "method": "bdev_wait_for_examine" 00:06:09.436 } 00:06:09.436 ] 00:06:09.436 } 00:06:09.436 ] 00:06:09.436 } 00:06:09.436 [2024-07-15 20:42:31.281315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.695 [2024-07-15 20:42:31.363357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.695 [2024-07-15 20:42:31.404786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.955  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:09.955 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:09.955 20:42:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.213 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:10.213 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:10.213 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.213 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.472 [2024-07-15 20:42:32.125732] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:10.472 [2024-07-15 20:42:32.125807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62755 ] 00:06:10.472 { 00:06:10.472 "subsystems": [ 00:06:10.472 { 00:06:10.472 "subsystem": "bdev", 00:06:10.472 "config": [ 00:06:10.472 { 00:06:10.472 "params": { 00:06:10.472 "trtype": "pcie", 00:06:10.472 "traddr": "0000:00:10.0", 00:06:10.472 "name": "Nvme0" 00:06:10.472 }, 00:06:10.472 "method": "bdev_nvme_attach_controller" 00:06:10.472 }, 00:06:10.472 { 00:06:10.472 "method": "bdev_wait_for_examine" 00:06:10.472 } 00:06:10.472 ] 00:06:10.472 } 00:06:10.472 ] 00:06:10.472 } 00:06:10.472 [2024-07-15 20:42:32.265927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.472 [2024-07-15 20:42:32.355885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.731 [2024-07-15 20:42:32.397773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.989  Copying: 48/48 [kB] (average 46 MBps) 00:06:10.989 00:06:10.989 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:10.989 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:10.989 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.989 20:42:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.989 [2024-07-15 20:42:32.723751] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:10.989 [2024-07-15 20:42:32.723976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62774 ] 00:06:10.989 { 00:06:10.989 "subsystems": [ 00:06:10.989 { 00:06:10.989 "subsystem": "bdev", 00:06:10.989 "config": [ 00:06:10.989 { 00:06:10.989 "params": { 00:06:10.989 "trtype": "pcie", 00:06:10.989 "traddr": "0000:00:10.0", 00:06:10.989 "name": "Nvme0" 00:06:10.989 }, 00:06:10.989 "method": "bdev_nvme_attach_controller" 00:06:10.989 }, 00:06:10.989 { 00:06:10.989 "method": "bdev_wait_for_examine" 00:06:10.989 } 00:06:10.989 ] 00:06:10.989 } 00:06:10.989 ] 00:06:10.989 } 00:06:10.989 [2024-07-15 20:42:32.864307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.247 [2024-07-15 20:42:32.953871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.247 [2024-07-15 20:42:32.995745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.514  Copying: 48/48 [kB] (average 46 MBps) 00:06:11.514 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.514 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.514 [2024-07-15 20:42:33.322808] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:11.514 [2024-07-15 20:42:33.322875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62790 ] 00:06:11.514 { 00:06:11.514 "subsystems": [ 00:06:11.514 { 00:06:11.514 "subsystem": "bdev", 00:06:11.514 "config": [ 00:06:11.514 { 00:06:11.514 "params": { 00:06:11.514 "trtype": "pcie", 00:06:11.514 "traddr": "0000:00:10.0", 00:06:11.514 "name": "Nvme0" 00:06:11.514 }, 00:06:11.514 "method": "bdev_nvme_attach_controller" 00:06:11.514 }, 00:06:11.514 { 00:06:11.514 "method": "bdev_wait_for_examine" 00:06:11.514 } 00:06:11.514 ] 00:06:11.514 } 00:06:11.514 ] 00:06:11.514 } 00:06:11.772 [2024-07-15 20:42:33.462029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.772 [2024-07-15 20:42:33.546877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.772 [2024-07-15 20:42:33.588490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.031  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:12.031 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.031 20:42:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.598 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:12.598 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.598 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.598 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.598 [2024-07-15 20:42:34.297205] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:12.598 [2024-07-15 20:42:34.297275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62809 ] 00:06:12.598 { 00:06:12.598 "subsystems": [ 00:06:12.598 { 00:06:12.598 "subsystem": "bdev", 00:06:12.598 "config": [ 00:06:12.598 { 00:06:12.598 "params": { 00:06:12.598 "trtype": "pcie", 00:06:12.598 "traddr": "0000:00:10.0", 00:06:12.598 "name": "Nvme0" 00:06:12.598 }, 00:06:12.598 "method": "bdev_nvme_attach_controller" 00:06:12.598 }, 00:06:12.598 { 00:06:12.598 "method": "bdev_wait_for_examine" 00:06:12.598 } 00:06:12.598 ] 00:06:12.598 } 00:06:12.598 ] 00:06:12.598 } 00:06:12.598 [2024-07-15 20:42:34.434673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.887 [2024-07-15 20:42:34.524463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.887 [2024-07-15 20:42:34.572673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.170  Copying: 48/48 [kB] (average 46 MBps) 00:06:13.170 00:06:13.170 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:13.170 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.170 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.170 20:42:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.170 { 00:06:13.170 "subsystems": [ 00:06:13.170 { 00:06:13.170 "subsystem": "bdev", 00:06:13.170 "config": [ 00:06:13.170 { 00:06:13.170 "params": { 00:06:13.170 "trtype": "pcie", 00:06:13.170 "traddr": "0000:00:10.0", 00:06:13.170 "name": "Nvme0" 00:06:13.170 }, 00:06:13.170 "method": "bdev_nvme_attach_controller" 00:06:13.170 }, 00:06:13.170 { 00:06:13.170 "method": "bdev_wait_for_examine" 00:06:13.170 } 00:06:13.170 ] 00:06:13.170 } 00:06:13.170 ] 00:06:13.170 } 00:06:13.170 [2024-07-15 20:42:34.905375] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:13.170 [2024-07-15 20:42:34.905444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62822 ] 00:06:13.170 [2024-07-15 20:42:35.045083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.428 [2024-07-15 20:42:35.125481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.428 [2024-07-15 20:42:35.166995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.686  Copying: 48/48 [kB] (average 46 MBps) 00:06:13.686 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.686 20:42:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.686 [2024-07-15 20:42:35.493992] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:13.686 [2024-07-15 20:42:35.494056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62838 ] 00:06:13.686 { 00:06:13.686 "subsystems": [ 00:06:13.686 { 00:06:13.686 "subsystem": "bdev", 00:06:13.686 "config": [ 00:06:13.687 { 00:06:13.687 "params": { 00:06:13.687 "trtype": "pcie", 00:06:13.687 "traddr": "0000:00:10.0", 00:06:13.687 "name": "Nvme0" 00:06:13.687 }, 00:06:13.687 "method": "bdev_nvme_attach_controller" 00:06:13.687 }, 00:06:13.687 { 00:06:13.687 "method": "bdev_wait_for_examine" 00:06:13.687 } 00:06:13.687 ] 00:06:13.687 } 00:06:13.687 ] 00:06:13.687 } 00:06:13.945 [2024-07-15 20:42:35.634696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.945 [2024-07-15 20:42:35.713800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.945 [2024-07-15 20:42:35.755286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.202  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.202 00:06:14.202 ************************************ 00:06:14.202 END TEST dd_rw 00:06:14.202 ************************************ 00:06:14.202 00:06:14.202 real 0m13.399s 00:06:14.202 user 0m9.603s 00:06:14.202 sys 0m4.811s 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.202 ************************************ 00:06:14.202 START TEST dd_rw_offset 00:06:14.202 ************************************ 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:14.202 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:14.460 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:14.461 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=odtwlno85zfbedpwqzvivgtpydnriyoa22kcxofk8dnurfatbe8yjzm9u799led0jv0w1txdbpbtuo1t6s19sou2yqne714ov21y8i6drd65sgeanpysk2w32goeitgf1y44aitjnq6kwcrcnpbcgj0ndrzohjydfea06josbzrmjubq474kdrziaagw4lp2qx4dokxu0qh6dnm50kft6n6npxhp7xn45310utpl4ejtvg5d2dc8liqrgyxeq3ow1ilc6i1w3ecbne8wrc6vd3uqwa2fuy5md1g6803lwztqm012g97unigjvohge10kefelt2yurz4n5cbdprzwt8wr3cqk84352kd6pb979dj4hcq2ihh9e19sb591t41kiv87kjo717a0gzinssjr40tf0wm0xuj6fh01au57cp28svbo47idlgc81j83gnyj49s20di22m9wq7mv4wd1fiq1rq6aq69jvtmof0a93vtw575wmm2sanh0jyir1tajaxmalum2jkcfak98xorpttwxhm1kbuhwjby0runp0tn3de5l4iyu0168as79ajffw928trvkgad0riokhk6kfdqto6zo92qiq2nskins9u4526qtizeec8jvkmuzw9x6j5lh5ekjv0zzpuw9cbf98yj3roijp414jno4tw6t4yog12om8zkhido999krso8zwvurabpgkzqzktxpbpwwm7uu8ofsa0h7gz5x8fuqnib4w7jkklmhdbnc21ffnebbt72k74f2be59iav4lf3cy7q2bhpejlep6g912t3drov4lahhuzfb4townhno7x9aorugp6t4oostw7u34hqys5wxlkjkcsgygsrjx1c0twr5vst0fw4y9cf5h4hqjnr4p5m7c3q6cojbaglpuw8ly252ru99bn7jakx65ovbe8eo2xtcunfyet1afv5rop239j29964ax1bgwtkuyo48halqyfsat4u828aywdn55gevb4u5uvcfb62ivntk2yeqx71i2sxj26hutuhl52e672v4fohpr5una72guqwetpeg7g8nwbatxckztd4zv288rjd918ewu74djzeg8e0rcx4px7izkp1v6fs6flp5kfop9ru3js8aprkk8sjh3izs6ezj9p636qe3ou35r4f63z65z8ozegipyqzs3n3uekoo8dng7rqt0gv9gkya52saz2l0i3xv7tvebkln3d236f3433hza4458s8gs73f6t8kfhaazdznyuw93eoqunhia077v4cv2budilldxwo3366fhvn62jyiahc5kokfs290v4gpswqm1fzkx3e7oxkc842bcuxwuxrm5t55ljgu0gnyqsaq6ofmrtxk879tqbt84nahx18roj7fdxbvb9tf7udojejgdai8chg0sf9mbq7qcy55qjewtdk9ffs2uq2rxqstpddh3c096eerou7n1xdtuirh22v57exjmc0fvyum473dg5m5by96fkflho8bkimip14bb722ylreo6lprznrwdrk99bcmq28pmcirhqoy609ls2ngs9ul1pwwpcswazct02h86fousuv97idfere3p036rxcfa8dr8a344epgktlqdku8hxtbttyeyrnfqvpcaai3jbf8q3tsq6oy73y5sq4xcpb1z6isl2j8fp4eczz6y9oefqpl6bdmk41ghsba2n0h6gtmigp9alsjukzsx1ppecbq7gi4c14xecax7zabapvw0u0qnwy7kcio694z2ba8qanh2c1xi6q0jscurbvk4l48jiaayiebkc4peolwlqzc6ptujt0ks3hyryx1cu5oxmxzzfri0ywnp1422oq68hzc990hg3j2x569lw6hkd9zylnf45vsu5s8vjchoixena4kowsr4xowyt8wuiir5znpgltwstfqfozpvnyn0p7m86blew78fxic45u5aznop7pd5ifprohyfs5ikrsrg9ptq56o3jiqsfa9rjdxvkp4wnozosh07sho7wt2pgnknmj4m37non9tuv1xqjmwfvu02iqdblbvd1eej35jnpn1koy9kgurcpcuy4a28dedjgfhfoj4tk05fiqsoa9kschvjhibi5ome217mqu6pywdc3gvxbdbhd8vdla1ip2h2we838xeqjrzs11hx61jyyjbnytbjcmyx0o6kmyg0kumytwdv95at1ekx3h6b3fuwjutxsjn7auq60dtrtxeyt825egfi7wu6a7kpk9fjj793mlxyuovjn47q76o5mf6u5ojvuk107ye4wvy8egauumtkht00xw2dfp328ypbbqdhmv3oag1t0mlwbu5xfr9vu962nw4sh9j5prfxltiv735cyssvo3tliarzgy2aak7du9eez3vkrj0w2yz6tlws0a5zcmvaqx8r96haqe1qpi1xnzqkef65vu8a7g37h8zhgtyqoauglmk3zlyv3gprhutb9efzfyacbkjo4xmecu7k2n6lk5cb99awf5m6zbruv5jgjmovhp2x6oz2lg9lulceehptk005nn7yjpij2o07rbvgkplp9vwbt205y4fgriyzl5q4j8oaogjnf7lb464ujytj9ahfeff0282nu3isdcyjsfwzi0juqgnotaq2e29kfrttwxx2213fyy5xle2agseh4l1d6o623t9xxozljke7dzh0w9syo7nn5grhjp9dcot0kqehe9fax6ffgyyxr7cd9qdf9f0csu6g9v1ex24iaya1b4q88p7s5cujn08eqkkybvpw6wjgfsh5xax7heldmewcn8fhqtg93tadb1v3llfee6434pvpiwk1x5yb086i3nx0kqr1qkkp2gl7hi7zs1ujpzy3prymf2waw2qugnhzdx3l0bsr0aohnol3wvd10gonhwph9gfjppx32m7yh57iciv4z0zs7um0etmn1ezx6npcb2zt0sady4po3329o9hflr9wdt9574qq1m6qb5r9hy2fgvt0e0jeqxhz6f3lhmn8vi6skrcozcqce0wo7r5e21ctoodf1tda1ssbqs8idlr91y3xoi682q98h2q4gw0y5pxy0mpusnmd81f1nqwmhi1gj9ue4rsem5s791crsqzq5s7ra8xyjo61c3br6gds68gw3pcpa4r3cph4e97wsxme284yhgtat4asbbf8gq718tkrf4kx3824xtwbhlcgfq536s4zwkrzww329xois209dff1u51q9bgzw4o80vvc909bdooasgt2bcf83l8br94quo84i6x07gnlio4o0dxvjjvikd0kq5irfkto5a3r0zgzl28ca8e9dfi3e24zvymc3ixe4lxmnqawh94y09627fgwyj4l7d6h6lxfu5mmrm4vr4b59up1i3m5yh5j3zv84gmaair25yt70j6cqxcj550l76oprgg00lbmb87bwub94jimckk784mo1le7r7uge5m12ymqc94h2ukxaa6p1eaamyeu1mwp7knfedbovvukaeb1vjtv3skqag67y473
k1y0lvfp3guptug8i1qzbaukydflrqrki2pajrrw4m4v43uzdi79uwtectozzjw6gq0k9j3gc7lczpsuxw1wg1h56ayvp21uqipsjsvuksqn2yrxklalf9kvx1pe7qtqogcrwnj41gpvlak631blan03d8cpk9cg6uriukvd2hi7coxwcjkb79hbh1y3ueggmam5tdfb5tbm73ior2qrtyjh9rgm2whdrc4i50pxhq113drtdljo07m83jqxspvmzei11tloo8p8mlrhr53ri8mvgp43lq22ghbigcvpuiirg41559gn6zz5sggpc6bq7x8lwn6ed4s7mxbwwr02jnvusf1zu5xhkma9h3gloo9fziybv4qf6ybpn5z4jnwd8rbw7odrqqn76z068lqn98t7zcddt11o1anfoyymi3mf4kgam6lc7234u78fd2qpdqud3jiv0x6629hgrpwve8tasokrvzp36d3joke6sdzswim0thnvkbnlknxk3g496vksok2b910r0cjlra3st8af2btqnqfvyd 00:06:14.461 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:14.461 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:14.461 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:14.461 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:14.461 [2024-07-15 20:42:36.203490] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:14.461 [2024-07-15 20:42:36.203557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62868 ] 00:06:14.461 { 00:06:14.461 "subsystems": [ 00:06:14.461 { 00:06:14.461 "subsystem": "bdev", 00:06:14.461 "config": [ 00:06:14.461 { 00:06:14.461 "params": { 00:06:14.461 "trtype": "pcie", 00:06:14.461 "traddr": "0000:00:10.0", 00:06:14.461 "name": "Nvme0" 00:06:14.461 }, 00:06:14.461 "method": "bdev_nvme_attach_controller" 00:06:14.461 }, 00:06:14.461 { 00:06:14.461 "method": "bdev_wait_for_examine" 00:06:14.461 } 00:06:14.461 ] 00:06:14.461 } 00:06:14.461 ] 00:06:14.461 } 00:06:14.461 [2024-07-15 20:42:36.343572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.720 [2024-07-15 20:42:36.427417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.720 [2024-07-15 20:42:36.468909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.978  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:14.978 00:06:14.978 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:14.978 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:14.978 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:14.978 20:42:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:14.978 [2024-07-15 20:42:36.785128] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:14.978 [2024-07-15 20:42:36.785213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62884 ] 00:06:14.978 { 00:06:14.978 "subsystems": [ 00:06:14.978 { 00:06:14.978 "subsystem": "bdev", 00:06:14.978 "config": [ 00:06:14.978 { 00:06:14.978 "params": { 00:06:14.978 "trtype": "pcie", 00:06:14.978 "traddr": "0000:00:10.0", 00:06:14.978 "name": "Nvme0" 00:06:14.978 }, 00:06:14.978 "method": "bdev_nvme_attach_controller" 00:06:14.978 }, 00:06:14.978 { 00:06:14.978 "method": "bdev_wait_for_examine" 00:06:14.978 } 00:06:14.978 ] 00:06:14.978 } 00:06:14.978 ] 00:06:14.978 } 00:06:15.237 [2024-07-15 20:42:36.925619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.237 [2024-07-15 20:42:37.012347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.237 [2024-07-15 20:42:37.054132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.497  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:15.497 00:06:15.497 20:42:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:15.497 ************************************ 00:06:15.497 END TEST dd_rw_offset 00:06:15.497 ************************************ 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ odtwlno85zfbedpwqzvivgtpydnriyoa22kcxofk8dnurfatbe8yjzm9u799led0jv0w1txdbpbtuo1t6s19sou2yqne714ov21y8i6drd65sgeanpysk2w32goeitgf1y44aitjnq6kwcrcnpbcgj0ndrzohjydfea06josbzrmjubq474kdrziaagw4lp2qx4dokxu0qh6dnm50kft6n6npxhp7xn45310utpl4ejtvg5d2dc8liqrgyxeq3ow1ilc6i1w3ecbne8wrc6vd3uqwa2fuy5md1g6803lwztqm012g97unigjvohge10kefelt2yurz4n5cbdprzwt8wr3cqk84352kd6pb979dj4hcq2ihh9e19sb591t41kiv87kjo717a0gzinssjr40tf0wm0xuj6fh01au57cp28svbo47idlgc81j83gnyj49s20di22m9wq7mv4wd1fiq1rq6aq69jvtmof0a93vtw575wmm2sanh0jyir1tajaxmalum2jkcfak98xorpttwxhm1kbuhwjby0runp0tn3de5l4iyu0168as79ajffw928trvkgad0riokhk6kfdqto6zo92qiq2nskins9u4526qtizeec8jvkmuzw9x6j5lh5ekjv0zzpuw9cbf98yj3roijp414jno4tw6t4yog12om8zkhido999krso8zwvurabpgkzqzktxpbpwwm7uu8ofsa0h7gz5x8fuqnib4w7jkklmhdbnc21ffnebbt72k74f2be59iav4lf3cy7q2bhpejlep6g912t3drov4lahhuzfb4townhno7x9aorugp6t4oostw7u34hqys5wxlkjkcsgygsrjx1c0twr5vst0fw4y9cf5h4hqjnr4p5m7c3q6cojbaglpuw8ly252ru99bn7jakx65ovbe8eo2xtcunfyet1afv5rop239j29964ax1bgwtkuyo48halqyfsat4u828aywdn55gevb4u5uvcfb62ivntk2yeqx71i2sxj26hutuhl52e672v4fohpr5una72guqwetpeg7g8nwbatxckztd4zv288rjd918ewu74djzeg8e0rcx4px7izkp1v6fs6flp5kfop9ru3js8aprkk8sjh3izs6ezj9p636qe3ou35r4f63z65z8ozegipyqzs3n3uekoo8dng7rqt0gv9gkya52saz2l0i3xv7tvebkln3d236f3433hza4458s8gs73f6t8kfhaazdznyuw93eoqunhia077v4cv2budilldxwo3366fhvn62jyiahc5kokfs290v4gpswqm1fzkx3e7oxkc842bcuxwuxrm5t55ljgu0gnyqsaq6ofmrtxk879tqbt84nahx18roj7fdxbvb9tf7udojejgdai8chg0sf9mbq7qcy55qjewtdk9ffs2uq2rxqstpddh3c096eerou7n1xdtuirh22v57exjmc0fvyum473dg5m5by96fkflho8bkimip14bb722ylreo6lprznrwdrk99bcmq28pmcirhqoy609ls2ngs9ul1pwwpcswazct02h86fousuv97idfere3p036rxcfa8dr8a344epgktlqdku8hxtbttyeyrnfqvpcaai3jbf8q3tsq6oy73y5sq4xcpb1z6isl2j8fp4eczz6y9oefqpl6bdmk41ghsba2n0h6gtmigp9alsjukzsx1ppecbq7gi4c14xecax7zabapvw0u0qnwy7kcio694z2ba8qanh2c1xi6q0jscurbvk4l48jiaayiebkc4peolwlqzc6ptujt0ks3hyryx1cu5oxmxzzfri0ywnp1422oq68hzc990hg3j2x569lw6hkd9zylnf45vsu5s8vjchoixena4kowsr4xowyt8wuiir5znpgltwstfqfozpvnyn0p7m86blew78fxic45u5aznop7pd5ifprohyfs5ikrsrg9ptq56o3jiqsfa9rjdxvkp4wnozo
sh07sho7wt2pgnknmj4m37non9tuv1xqjmwfvu02iqdblbvd1eej35jnpn1koy9kgurcpcuy4a28dedjgfhfoj4tk05fiqsoa9kschvjhibi5ome217mqu6pywdc3gvxbdbhd8vdla1ip2h2we838xeqjrzs11hx61jyyjbnytbjcmyx0o6kmyg0kumytwdv95at1ekx3h6b3fuwjutxsjn7auq60dtrtxeyt825egfi7wu6a7kpk9fjj793mlxyuovjn47q76o5mf6u5ojvuk107ye4wvy8egauumtkht00xw2dfp328ypbbqdhmv3oag1t0mlwbu5xfr9vu962nw4sh9j5prfxltiv735cyssvo3tliarzgy2aak7du9eez3vkrj0w2yz6tlws0a5zcmvaqx8r96haqe1qpi1xnzqkef65vu8a7g37h8zhgtyqoauglmk3zlyv3gprhutb9efzfyacbkjo4xmecu7k2n6lk5cb99awf5m6zbruv5jgjmovhp2x6oz2lg9lulceehptk005nn7yjpij2o07rbvgkplp9vwbt205y4fgriyzl5q4j8oaogjnf7lb464ujytj9ahfeff0282nu3isdcyjsfwzi0juqgnotaq2e29kfrttwxx2213fyy5xle2agseh4l1d6o623t9xxozljke7dzh0w9syo7nn5grhjp9dcot0kqehe9fax6ffgyyxr7cd9qdf9f0csu6g9v1ex24iaya1b4q88p7s5cujn08eqkkybvpw6wjgfsh5xax7heldmewcn8fhqtg93tadb1v3llfee6434pvpiwk1x5yb086i3nx0kqr1qkkp2gl7hi7zs1ujpzy3prymf2waw2qugnhzdx3l0bsr0aohnol3wvd10gonhwph9gfjppx32m7yh57iciv4z0zs7um0etmn1ezx6npcb2zt0sady4po3329o9hflr9wdt9574qq1m6qb5r9hy2fgvt0e0jeqxhz6f3lhmn8vi6skrcozcqce0wo7r5e21ctoodf1tda1ssbqs8idlr91y3xoi682q98h2q4gw0y5pxy0mpusnmd81f1nqwmhi1gj9ue4rsem5s791crsqzq5s7ra8xyjo61c3br6gds68gw3pcpa4r3cph4e97wsxme284yhgtat4asbbf8gq718tkrf4kx3824xtwbhlcgfq536s4zwkrzww329xois209dff1u51q9bgzw4o80vvc909bdooasgt2bcf83l8br94quo84i6x07gnlio4o0dxvjjvikd0kq5irfkto5a3r0zgzl28ca8e9dfi3e24zvymc3ixe4lxmnqawh94y09627fgwyj4l7d6h6lxfu5mmrm4vr4b59up1i3m5yh5j3zv84gmaair25yt70j6cqxcj550l76oprgg00lbmb87bwub94jimckk784mo1le7r7uge5m12ymqc94h2ukxaa6p1eaamyeu1mwp7knfedbovvukaeb1vjtv3skqag67y473k1y0lvfp3guptug8i1qzbaukydflrqrki2pajrrw4m4v43uzdi79uwtectozzjw6gq0k9j3gc7lczpsuxw1wg1h56ayvp21uqipsjsvuksqn2yrxklalf9kvx1pe7qtqogcrwnj41gpvlak631blan03d8cpk9cg6uriukvd2hi7coxwcjkb79hbh1y3ueggmam5tdfb5tbm73ior2qrtyjh9rgm2whdrc4i50pxhq113drtdljo07m83jqxspvmzei11tloo8p8mlrhr53ri8mvgp43lq22ghbigcvpuiirg41559gn6zz5sggpc6bq7x8lwn6ed4s7mxbwwr02jnvusf1zu5xhkma9h3gloo9fziybv4qf6ybpn5z4jnwd8rbw7odrqqn76z068lqn98t7zcddt11o1anfoyymi3mf4kgam6lc7234u78fd2qpdqud3jiv0x6629hgrpwve8tasokrvzp36d3joke6sdzswim0thnvkbnlknxk3g496vksok2b910r0cjlra3st8af2btqnqfvyd == 
\o\d\t\w\l\n\o\8\5\z\f\b\e\d\p\w\q\z\v\i\v\g\t\p\y\d\n\r\i\y\o\a\2\2\k\c\x\o\f\k\8\d\n\u\r\f\a\t\b\e\8\y\j\z\m\9\u\7\9\9\l\e\d\0\j\v\0\w\1\t\x\d\b\p\b\t\u\o\1\t\6\s\1\9\s\o\u\2\y\q\n\e\7\1\4\o\v\2\1\y\8\i\6\d\r\d\6\5\s\g\e\a\n\p\y\s\k\2\w\3\2\g\o\e\i\t\g\f\1\y\4\4\a\i\t\j\n\q\6\k\w\c\r\c\n\p\b\c\g\j\0\n\d\r\z\o\h\j\y\d\f\e\a\0\6\j\o\s\b\z\r\m\j\u\b\q\4\7\4\k\d\r\z\i\a\a\g\w\4\l\p\2\q\x\4\d\o\k\x\u\0\q\h\6\d\n\m\5\0\k\f\t\6\n\6\n\p\x\h\p\7\x\n\4\5\3\1\0\u\t\p\l\4\e\j\t\v\g\5\d\2\d\c\8\l\i\q\r\g\y\x\e\q\3\o\w\1\i\l\c\6\i\1\w\3\e\c\b\n\e\8\w\r\c\6\v\d\3\u\q\w\a\2\f\u\y\5\m\d\1\g\6\8\0\3\l\w\z\t\q\m\0\1\2\g\9\7\u\n\i\g\j\v\o\h\g\e\1\0\k\e\f\e\l\t\2\y\u\r\z\4\n\5\c\b\d\p\r\z\w\t\8\w\r\3\c\q\k\8\4\3\5\2\k\d\6\p\b\9\7\9\d\j\4\h\c\q\2\i\h\h\9\e\1\9\s\b\5\9\1\t\4\1\k\i\v\8\7\k\j\o\7\1\7\a\0\g\z\i\n\s\s\j\r\4\0\t\f\0\w\m\0\x\u\j\6\f\h\0\1\a\u\5\7\c\p\2\8\s\v\b\o\4\7\i\d\l\g\c\8\1\j\8\3\g\n\y\j\4\9\s\2\0\d\i\2\2\m\9\w\q\7\m\v\4\w\d\1\f\i\q\1\r\q\6\a\q\6\9\j\v\t\m\o\f\0\a\9\3\v\t\w\5\7\5\w\m\m\2\s\a\n\h\0\j\y\i\r\1\t\a\j\a\x\m\a\l\u\m\2\j\k\c\f\a\k\9\8\x\o\r\p\t\t\w\x\h\m\1\k\b\u\h\w\j\b\y\0\r\u\n\p\0\t\n\3\d\e\5\l\4\i\y\u\0\1\6\8\a\s\7\9\a\j\f\f\w\9\2\8\t\r\v\k\g\a\d\0\r\i\o\k\h\k\6\k\f\d\q\t\o\6\z\o\9\2\q\i\q\2\n\s\k\i\n\s\9\u\4\5\2\6\q\t\i\z\e\e\c\8\j\v\k\m\u\z\w\9\x\6\j\5\l\h\5\e\k\j\v\0\z\z\p\u\w\9\c\b\f\9\8\y\j\3\r\o\i\j\p\4\1\4\j\n\o\4\t\w\6\t\4\y\o\g\1\2\o\m\8\z\k\h\i\d\o\9\9\9\k\r\s\o\8\z\w\v\u\r\a\b\p\g\k\z\q\z\k\t\x\p\b\p\w\w\m\7\u\u\8\o\f\s\a\0\h\7\g\z\5\x\8\f\u\q\n\i\b\4\w\7\j\k\k\l\m\h\d\b\n\c\2\1\f\f\n\e\b\b\t\7\2\k\7\4\f\2\b\e\5\9\i\a\v\4\l\f\3\c\y\7\q\2\b\h\p\e\j\l\e\p\6\g\9\1\2\t\3\d\r\o\v\4\l\a\h\h\u\z\f\b\4\t\o\w\n\h\n\o\7\x\9\a\o\r\u\g\p\6\t\4\o\o\s\t\w\7\u\3\4\h\q\y\s\5\w\x\l\k\j\k\c\s\g\y\g\s\r\j\x\1\c\0\t\w\r\5\v\s\t\0\f\w\4\y\9\c\f\5\h\4\h\q\j\n\r\4\p\5\m\7\c\3\q\6\c\o\j\b\a\g\l\p\u\w\8\l\y\2\5\2\r\u\9\9\b\n\7\j\a\k\x\6\5\o\v\b\e\8\e\o\2\x\t\c\u\n\f\y\e\t\1\a\f\v\5\r\o\p\2\3\9\j\2\9\9\6\4\a\x\1\b\g\w\t\k\u\y\o\4\8\h\a\l\q\y\f\s\a\t\4\u\8\2\8\a\y\w\d\n\5\5\g\e\v\b\4\u\5\u\v\c\f\b\6\2\i\v\n\t\k\2\y\e\q\x\7\1\i\2\s\x\j\2\6\h\u\t\u\h\l\5\2\e\6\7\2\v\4\f\o\h\p\r\5\u\n\a\7\2\g\u\q\w\e\t\p\e\g\7\g\8\n\w\b\a\t\x\c\k\z\t\d\4\z\v\2\8\8\r\j\d\9\1\8\e\w\u\7\4\d\j\z\e\g\8\e\0\r\c\x\4\p\x\7\i\z\k\p\1\v\6\f\s\6\f\l\p\5\k\f\o\p\9\r\u\3\j\s\8\a\p\r\k\k\8\s\j\h\3\i\z\s\6\e\z\j\9\p\6\3\6\q\e\3\o\u\3\5\r\4\f\6\3\z\6\5\z\8\o\z\e\g\i\p\y\q\z\s\3\n\3\u\e\k\o\o\8\d\n\g\7\r\q\t\0\g\v\9\g\k\y\a\5\2\s\a\z\2\l\0\i\3\x\v\7\t\v\e\b\k\l\n\3\d\2\3\6\f\3\4\3\3\h\z\a\4\4\5\8\s\8\g\s\7\3\f\6\t\8\k\f\h\a\a\z\d\z\n\y\u\w\9\3\e\o\q\u\n\h\i\a\0\7\7\v\4\c\v\2\b\u\d\i\l\l\d\x\w\o\3\3\6\6\f\h\v\n\6\2\j\y\i\a\h\c\5\k\o\k\f\s\2\9\0\v\4\g\p\s\w\q\m\1\f\z\k\x\3\e\7\o\x\k\c\8\4\2\b\c\u\x\w\u\x\r\m\5\t\5\5\l\j\g\u\0\g\n\y\q\s\a\q\6\o\f\m\r\t\x\k\8\7\9\t\q\b\t\8\4\n\a\h\x\1\8\r\o\j\7\f\d\x\b\v\b\9\t\f\7\u\d\o\j\e\j\g\d\a\i\8\c\h\g\0\s\f\9\m\b\q\7\q\c\y\5\5\q\j\e\w\t\d\k\9\f\f\s\2\u\q\2\r\x\q\s\t\p\d\d\h\3\c\0\9\6\e\e\r\o\u\7\n\1\x\d\t\u\i\r\h\2\2\v\5\7\e\x\j\m\c\0\f\v\y\u\m\4\7\3\d\g\5\m\5\b\y\9\6\f\k\f\l\h\o\8\b\k\i\m\i\p\1\4\b\b\7\2\2\y\l\r\e\o\6\l\p\r\z\n\r\w\d\r\k\9\9\b\c\m\q\2\8\p\m\c\i\r\h\q\o\y\6\0\9\l\s\2\n\g\s\9\u\l\1\p\w\w\p\c\s\w\a\z\c\t\0\2\h\8\6\f\o\u\s\u\v\9\7\i\d\f\e\r\e\3\p\0\3\6\r\x\c\f\a\8\d\r\8\a\3\4\4\e\p\g\k\t\l\q\d\k\u\8\h\x\t\b\t\t\y\e\y\r\n\f\q\v\p\c\a\a\i\3\j\b\f\8\q\3\t\s\q\6\o\y\7\3\y\5\s\q\4\x\c\p\b\1\z\6\i\s\l\2\j\8\f\p\4\e\c\z\z\6\y\9\o\e\f\q\p\l\6\b\d\m\k\4\1\g\h\s\b\a\2\n\0\h\6\g\t\m\i\g\p\9\a\l\s\j\u\k\z\s\x\1\p\p\e\c\b\q\7\g\i\4\c\1\4\x\e\c\a\x\7\z\a\b\a\p\v\w\0\u\0\q\n\w\y\7\k\c\i\o\6\9\4\z\
2\b\a\8\q\a\n\h\2\c\1\x\i\6\q\0\j\s\c\u\r\b\v\k\4\l\4\8\j\i\a\a\y\i\e\b\k\c\4\p\e\o\l\w\l\q\z\c\6\p\t\u\j\t\0\k\s\3\h\y\r\y\x\1\c\u\5\o\x\m\x\z\z\f\r\i\0\y\w\n\p\1\4\2\2\o\q\6\8\h\z\c\9\9\0\h\g\3\j\2\x\5\6\9\l\w\6\h\k\d\9\z\y\l\n\f\4\5\v\s\u\5\s\8\v\j\c\h\o\i\x\e\n\a\4\k\o\w\s\r\4\x\o\w\y\t\8\w\u\i\i\r\5\z\n\p\g\l\t\w\s\t\f\q\f\o\z\p\v\n\y\n\0\p\7\m\8\6\b\l\e\w\7\8\f\x\i\c\4\5\u\5\a\z\n\o\p\7\p\d\5\i\f\p\r\o\h\y\f\s\5\i\k\r\s\r\g\9\p\t\q\5\6\o\3\j\i\q\s\f\a\9\r\j\d\x\v\k\p\4\w\n\o\z\o\s\h\0\7\s\h\o\7\w\t\2\p\g\n\k\n\m\j\4\m\3\7\n\o\n\9\t\u\v\1\x\q\j\m\w\f\v\u\0\2\i\q\d\b\l\b\v\d\1\e\e\j\3\5\j\n\p\n\1\k\o\y\9\k\g\u\r\c\p\c\u\y\4\a\2\8\d\e\d\j\g\f\h\f\o\j\4\t\k\0\5\f\i\q\s\o\a\9\k\s\c\h\v\j\h\i\b\i\5\o\m\e\2\1\7\m\q\u\6\p\y\w\d\c\3\g\v\x\b\d\b\h\d\8\v\d\l\a\1\i\p\2\h\2\w\e\8\3\8\x\e\q\j\r\z\s\1\1\h\x\6\1\j\y\y\j\b\n\y\t\b\j\c\m\y\x\0\o\6\k\m\y\g\0\k\u\m\y\t\w\d\v\9\5\a\t\1\e\k\x\3\h\6\b\3\f\u\w\j\u\t\x\s\j\n\7\a\u\q\6\0\d\t\r\t\x\e\y\t\8\2\5\e\g\f\i\7\w\u\6\a\7\k\p\k\9\f\j\j\7\9\3\m\l\x\y\u\o\v\j\n\4\7\q\7\6\o\5\m\f\6\u\5\o\j\v\u\k\1\0\7\y\e\4\w\v\y\8\e\g\a\u\u\m\t\k\h\t\0\0\x\w\2\d\f\p\3\2\8\y\p\b\b\q\d\h\m\v\3\o\a\g\1\t\0\m\l\w\b\u\5\x\f\r\9\v\u\9\6\2\n\w\4\s\h\9\j\5\p\r\f\x\l\t\i\v\7\3\5\c\y\s\s\v\o\3\t\l\i\a\r\z\g\y\2\a\a\k\7\d\u\9\e\e\z\3\v\k\r\j\0\w\2\y\z\6\t\l\w\s\0\a\5\z\c\m\v\a\q\x\8\r\9\6\h\a\q\e\1\q\p\i\1\x\n\z\q\k\e\f\6\5\v\u\8\a\7\g\3\7\h\8\z\h\g\t\y\q\o\a\u\g\l\m\k\3\z\l\y\v\3\g\p\r\h\u\t\b\9\e\f\z\f\y\a\c\b\k\j\o\4\x\m\e\c\u\7\k\2\n\6\l\k\5\c\b\9\9\a\w\f\5\m\6\z\b\r\u\v\5\j\g\j\m\o\v\h\p\2\x\6\o\z\2\l\g\9\l\u\l\c\e\e\h\p\t\k\0\0\5\n\n\7\y\j\p\i\j\2\o\0\7\r\b\v\g\k\p\l\p\9\v\w\b\t\2\0\5\y\4\f\g\r\i\y\z\l\5\q\4\j\8\o\a\o\g\j\n\f\7\l\b\4\6\4\u\j\y\t\j\9\a\h\f\e\f\f\0\2\8\2\n\u\3\i\s\d\c\y\j\s\f\w\z\i\0\j\u\q\g\n\o\t\a\q\2\e\2\9\k\f\r\t\t\w\x\x\2\2\1\3\f\y\y\5\x\l\e\2\a\g\s\e\h\4\l\1\d\6\o\6\2\3\t\9\x\x\o\z\l\j\k\e\7\d\z\h\0\w\9\s\y\o\7\n\n\5\g\r\h\j\p\9\d\c\o\t\0\k\q\e\h\e\9\f\a\x\6\f\f\g\y\y\x\r\7\c\d\9\q\d\f\9\f\0\c\s\u\6\g\9\v\1\e\x\2\4\i\a\y\a\1\b\4\q\8\8\p\7\s\5\c\u\j\n\0\8\e\q\k\k\y\b\v\p\w\6\w\j\g\f\s\h\5\x\a\x\7\h\e\l\d\m\e\w\c\n\8\f\h\q\t\g\9\3\t\a\d\b\1\v\3\l\l\f\e\e\6\4\3\4\p\v\p\i\w\k\1\x\5\y\b\0\8\6\i\3\n\x\0\k\q\r\1\q\k\k\p\2\g\l\7\h\i\7\z\s\1\u\j\p\z\y\3\p\r\y\m\f\2\w\a\w\2\q\u\g\n\h\z\d\x\3\l\0\b\s\r\0\a\o\h\n\o\l\3\w\v\d\1\0\g\o\n\h\w\p\h\9\g\f\j\p\p\x\3\2\m\7\y\h\5\7\i\c\i\v\4\z\0\z\s\7\u\m\0\e\t\m\n\1\e\z\x\6\n\p\c\b\2\z\t\0\s\a\d\y\4\p\o\3\3\2\9\o\9\h\f\l\r\9\w\d\t\9\5\7\4\q\q\1\m\6\q\b\5\r\9\h\y\2\f\g\v\t\0\e\0\j\e\q\x\h\z\6\f\3\l\h\m\n\8\v\i\6\s\k\r\c\o\z\c\q\c\e\0\w\o\7\r\5\e\2\1\c\t\o\o\d\f\1\t\d\a\1\s\s\b\q\s\8\i\d\l\r\9\1\y\3\x\o\i\6\8\2\q\9\8\h\2\q\4\g\w\0\y\5\p\x\y\0\m\p\u\s\n\m\d\8\1\f\1\n\q\w\m\h\i\1\g\j\9\u\e\4\r\s\e\m\5\s\7\9\1\c\r\s\q\z\q\5\s\7\r\a\8\x\y\j\o\6\1\c\3\b\r\6\g\d\s\6\8\g\w\3\p\c\p\a\4\r\3\c\p\h\4\e\9\7\w\s\x\m\e\2\8\4\y\h\g\t\a\t\4\a\s\b\b\f\8\g\q\7\1\8\t\k\r\f\4\k\x\3\8\2\4\x\t\w\b\h\l\c\g\f\q\5\3\6\s\4\z\w\k\r\z\w\w\3\2\9\x\o\i\s\2\0\9\d\f\f\1\u\5\1\q\9\b\g\z\w\4\o\8\0\v\v\c\9\0\9\b\d\o\o\a\s\g\t\2\b\c\f\8\3\l\8\b\r\9\4\q\u\o\8\4\i\6\x\0\7\g\n\l\i\o\4\o\0\d\x\v\j\j\v\i\k\d\0\k\q\5\i\r\f\k\t\o\5\a\3\r\0\z\g\z\l\2\8\c\a\8\e\9\d\f\i\3\e\2\4\z\v\y\m\c\3\i\x\e\4\l\x\m\n\q\a\w\h\9\4\y\0\9\6\2\7\f\g\w\y\j\4\l\7\d\6\h\6\l\x\f\u\5\m\m\r\m\4\v\r\4\b\5\9\u\p\1\i\3\m\5\y\h\5\j\3\z\v\8\4\g\m\a\a\i\r\2\5\y\t\7\0\j\6\c\q\x\c\j\5\5\0\l\7\6\o\p\r\g\g\0\0\l\b\m\b\8\7\b\w\u\b\9\4\j\i\m\c\k\k\7\8\4\m\o\1\l\e\7\r\7\u\g\e\5\m\1\2\y\m\q\c\9\4\h\2\u\k\x\a\a\6\p\1\e\a\a\m\y\e\u\1\m\w\p\7\k\n\f\e\d\b\o\v\v\u\k\a\e\b\1\v\j\t\v\3\s\k\q\a\g\6\7\y\4\7\3\k\1\y\0\l
\v\f\p\3\g\u\p\t\u\g\8\i\1\q\z\b\a\u\k\y\d\f\l\r\q\r\k\i\2\p\a\j\r\r\w\4\m\4\v\4\3\u\z\d\i\7\9\u\w\t\e\c\t\o\z\z\j\w\6\g\q\0\k\9\j\3\g\c\7\l\c\z\p\s\u\x\w\1\w\g\1\h\5\6\a\y\v\p\2\1\u\q\i\p\s\j\s\v\u\k\s\q\n\2\y\r\x\k\l\a\l\f\9\k\v\x\1\p\e\7\q\t\q\o\g\c\r\w\n\j\4\1\g\p\v\l\a\k\6\3\1\b\l\a\n\0\3\d\8\c\p\k\9\c\g\6\u\r\i\u\k\v\d\2\h\i\7\c\o\x\w\c\j\k\b\7\9\h\b\h\1\y\3\u\e\g\g\m\a\m\5\t\d\f\b\5\t\b\m\7\3\i\o\r\2\q\r\t\y\j\h\9\r\g\m\2\w\h\d\r\c\4\i\5\0\p\x\h\q\1\1\3\d\r\t\d\l\j\o\0\7\m\8\3\j\q\x\s\p\v\m\z\e\i\1\1\t\l\o\o\8\p\8\m\l\r\h\r\5\3\r\i\8\m\v\g\p\4\3\l\q\2\2\g\h\b\i\g\c\v\p\u\i\i\r\g\4\1\5\5\9\g\n\6\z\z\5\s\g\g\p\c\6\b\q\7\x\8\l\w\n\6\e\d\4\s\7\m\x\b\w\w\r\0\2\j\n\v\u\s\f\1\z\u\5\x\h\k\m\a\9\h\3\g\l\o\o\9\f\z\i\y\b\v\4\q\f\6\y\b\p\n\5\z\4\j\n\w\d\8\r\b\w\7\o\d\r\q\q\n\7\6\z\0\6\8\l\q\n\9\8\t\7\z\c\d\d\t\1\1\o\1\a\n\f\o\y\y\m\i\3\m\f\4\k\g\a\m\6\l\c\7\2\3\4\u\7\8\f\d\2\q\p\d\q\u\d\3\j\i\v\0\x\6\6\2\9\h\g\r\p\w\v\e\8\t\a\s\o\k\r\v\z\p\3\6\d\3\j\o\k\e\6\s\d\z\s\w\i\m\0\t\h\n\v\k\b\n\l\k\n\x\k\3\g\4\9\6\v\k\s\o\k\2\b\9\1\0\r\0\c\j\l\r\a\3\s\t\8\a\f\2\b\t\q\n\q\f\v\y\d ]] 00:06:15.498 00:06:15.498 real 0m1.227s 00:06:15.498 user 0m0.852s 00:06:15.498 sys 0m0.497s 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.498 20:42:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.758 [2024-07-15 20:42:37.437701] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:15.758 [2024-07-15 20:42:37.437768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62919 ] 00:06:15.758 { 00:06:15.758 "subsystems": [ 00:06:15.758 { 00:06:15.758 "subsystem": "bdev", 00:06:15.758 "config": [ 00:06:15.758 { 00:06:15.758 "params": { 00:06:15.758 "trtype": "pcie", 00:06:15.758 "traddr": "0000:00:10.0", 00:06:15.758 "name": "Nvme0" 00:06:15.758 }, 00:06:15.758 "method": "bdev_nvme_attach_controller" 00:06:15.758 }, 00:06:15.758 { 00:06:15.758 "method": "bdev_wait_for_examine" 00:06:15.758 } 00:06:15.758 ] 00:06:15.758 } 00:06:15.758 ] 00:06:15.758 } 00:06:15.758 [2024-07-15 20:42:37.578337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.016 [2024-07-15 20:42:37.671847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.016 [2024-07-15 20:42:37.713556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.273  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.273 00:06:16.273 20:42:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.273 00:06:16.273 real 0m16.422s 00:06:16.273 user 0m11.468s 00:06:16.273 sys 0m6.008s 00:06:16.273 20:42:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.273 ************************************ 00:06:16.273 END TEST spdk_dd_basic_rw 00:06:16.273 ************************************ 00:06:16.273 20:42:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.273 20:42:38 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:16.273 20:42:38 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:16.273 20:42:38 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.273 20:42:38 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.273 20:42:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:16.273 ************************************ 00:06:16.273 START TEST spdk_dd_posix 00:06:16.273 ************************************ 00:06:16.273 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:16.273 * Looking for test storage... 
00:06:16.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:16.531 * First test run, liburing in use 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.531 ************************************ 00:06:16.531 START TEST dd_flag_append 00:06:16.531 ************************************ 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=6aank8oqlbgb52eolhw1tvhuzvt9fb1l 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=acgk5tzna2hg9dxqzth7dl6ngcjq1zsn 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 6aank8oqlbgb52eolhw1tvhuzvt9fb1l 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s acgk5tzna2hg9dxqzth7dl6ngcjq1zsn 00:06:16.531 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:16.531 [2024-07-15 20:42:38.275825] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:16.531 [2024-07-15 20:42:38.275893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62977 ] 00:06:16.531 [2024-07-15 20:42:38.415161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.789 [2024-07-15 20:42:38.490154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.789 [2024-07-15 20:42:38.530788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.047  Copying: 32/32 [B] (average 31 kBps) 00:06:17.047 00:06:17.047 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ acgk5tzna2hg9dxqzth7dl6ngcjq1zsn6aank8oqlbgb52eolhw1tvhuzvt9fb1l == \a\c\g\k\5\t\z\n\a\2\h\g\9\d\x\q\z\t\h\7\d\l\6\n\g\c\j\q\1\z\s\n\6\a\a\n\k\8\o\q\l\b\g\b\5\2\e\o\l\h\w\1\t\v\h\u\z\v\t\9\f\b\1\l ]] 00:06:17.047 00:06:17.047 real 0m0.515s 00:06:17.047 user 0m0.282s 00:06:17.047 sys 0m0.227s 00:06:17.047 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.047 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:17.047 ************************************ 00:06:17.047 END TEST dd_flag_append 00:06:17.047 ************************************ 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:17.048 ************************************ 00:06:17.048 START TEST dd_flag_directory 00:06:17.048 ************************************ 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.048 20:42:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:17.048 [2024-07-15 20:42:38.857436] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:17.048 [2024-07-15 20:42:38.857508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:06:17.305 [2024-07-15 20:42:38.998595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.305 [2024-07-15 20:42:39.087008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.305 [2024-07-15 20:42:39.127826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.305 [2024-07-15 20:42:39.153807] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.305 [2024-07-15 20:42:39.153854] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.305 [2024-07-15 20:42:39.153866] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.563 [2024-07-15 20:42:39.243003] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:17.563 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:17.563 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.563 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:17.563 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.563 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:17.563 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:17.564 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:17.564 [2024-07-15 20:42:39.377411] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:17.564 [2024-07-15 20:42:39.377507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63016 ] 00:06:17.821 [2024-07-15 20:42:39.526513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.821 [2024-07-15 20:42:39.606343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.821 [2024-07-15 20:42:39.646989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.821 [2024-07-15 20:42:39.672644] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.821 [2024-07-15 20:42:39.672691] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:17.821 [2024-07-15 20:42:39.672703] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.079 [2024-07-15 20:42:39.761845] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.079 00:06:18.079 real 0m1.047s 00:06:18.079 user 0m0.589s 00:06:18.079 sys 0m0.249s 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:18.079 ************************************ 00:06:18.079 END TEST dd_flag_directory 00:06:18.079 
************************************ 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.079 ************************************ 00:06:18.079 START TEST dd_flag_nofollow 00:06:18.079 ************************************ 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.079 20:42:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.080 
[2024-07-15 20:42:39.974288] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:18.080 [2024-07-15 20:42:39.974395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63044 ] 00:06:18.337 [2024-07-15 20:42:40.124270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.337 [2024-07-15 20:42:40.212159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.594 [2024-07-15 20:42:40.252908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.594 [2024-07-15 20:42:40.279500] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:18.594 [2024-07-15 20:42:40.279547] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:18.594 [2024-07-15 20:42:40.279560] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.594 [2024-07-15 20:42:40.369034] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.594 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:18.851 [2024-07-15 20:42:40.506051] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:18.851 [2024-07-15 20:42:40.506120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63059 ] 00:06:18.851 [2024-07-15 20:42:40.644614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.851 [2024-07-15 20:42:40.726795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.109 [2024-07-15 20:42:40.767532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.109 [2024-07-15 20:42:40.793242] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.109 [2024-07-15 20:42:40.793287] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:19.109 [2024-07-15 20:42:40.793300] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.109 [2024-07-15 20:42:40.882302] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.109 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:19.109 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.109 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:19.109 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.110 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:19.110 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.110 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:19.110 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:19.110 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:19.110 20:42:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.368 [2024-07-15 20:42:41.021736] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
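For context on the dd_flag_nofollow sequence above: dd.dump0.link and dd.dump1.link are symlinked to the real dump files, spdk_dd is expected to fail with "Too many levels of symbolic links" (ELOOP) whenever --iflag=nofollow or --oflag=nofollow is applied to one of the links, and the run that starts next confirms the same copy succeeds once the flag is dropped. A hedged sketch of that contrast, with the exit-code bookkeeping of the NOT helper omitted:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd      # binary path as printed in the trace
  head -c 512 /dev/urandom > dd.dump0                    # stand-in for gen_bytes 512
  ln -fs dd.dump0 dd.dump0.link
  if "$DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
      echo 'unexpected: nofollow still followed the symlink'
  else
      echo 'nofollow rejected the symlink as expected (ELOOP)'
  fi
  # Without the flag, the copy through the link is expected to succeed.
  "$DD" --if=dd.dump0.link --of=dd.dump1 && echo 'plain copy through link OK'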
00:06:19.368 [2024-07-15 20:42:41.021803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63061 ] 00:06:19.368 [2024-07-15 20:42:41.160115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.368 [2024-07-15 20:42:41.246984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.625 [2024-07-15 20:42:41.287758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.625  Copying: 512/512 [B] (average 500 kBps) 00:06:19.625 00:06:19.625 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ glzj105h2dw0wdv91h95bnltiy1gq7ilkykkoi93teclnxq2p93a5z73s2c6gtk8dimek426sky9jn7c89vjaz0co0lh098yatk36quim024sxrvnldq6ztgqys69nuhcgte5msb4fucomfruyriiohh9u9h1et4a0rqkk9hjpgj5irmyio45e592pd68uxv6gj3w4jqm7gdhxtgaijclvxh9figutea7x68w3rzi9hfppiio1ngr4jnac7sbhqxeyz92cnzcpthbf0i39v13xevl6gnx0ktyf8cl4o3gixj0lknvlxnn6lvrfxydnesx5edq25d1386it0cr3bk0bulo9jypb71ua8rechzwmu6h7vq5lqwub2owfcv52gx3tw26qusafny8vtykce0qvg92dpl2t8qir8jdpc9vkisboa93o2j0u8er43g9y3n14mkl12cnjt5v9pp8whobltclcca4lhclxf98pyr75lx2wv2hll1yi1n7clivexd == \g\l\z\j\1\0\5\h\2\d\w\0\w\d\v\9\1\h\9\5\b\n\l\t\i\y\1\g\q\7\i\l\k\y\k\k\o\i\9\3\t\e\c\l\n\x\q\2\p\9\3\a\5\z\7\3\s\2\c\6\g\t\k\8\d\i\m\e\k\4\2\6\s\k\y\9\j\n\7\c\8\9\v\j\a\z\0\c\o\0\l\h\0\9\8\y\a\t\k\3\6\q\u\i\m\0\2\4\s\x\r\v\n\l\d\q\6\z\t\g\q\y\s\6\9\n\u\h\c\g\t\e\5\m\s\b\4\f\u\c\o\m\f\r\u\y\r\i\i\o\h\h\9\u\9\h\1\e\t\4\a\0\r\q\k\k\9\h\j\p\g\j\5\i\r\m\y\i\o\4\5\e\5\9\2\p\d\6\8\u\x\v\6\g\j\3\w\4\j\q\m\7\g\d\h\x\t\g\a\i\j\c\l\v\x\h\9\f\i\g\u\t\e\a\7\x\6\8\w\3\r\z\i\9\h\f\p\p\i\i\o\1\n\g\r\4\j\n\a\c\7\s\b\h\q\x\e\y\z\9\2\c\n\z\c\p\t\h\b\f\0\i\3\9\v\1\3\x\e\v\l\6\g\n\x\0\k\t\y\f\8\c\l\4\o\3\g\i\x\j\0\l\k\n\v\l\x\n\n\6\l\v\r\f\x\y\d\n\e\s\x\5\e\d\q\2\5\d\1\3\8\6\i\t\0\c\r\3\b\k\0\b\u\l\o\9\j\y\p\b\7\1\u\a\8\r\e\c\h\z\w\m\u\6\h\7\v\q\5\l\q\w\u\b\2\o\w\f\c\v\5\2\g\x\3\t\w\2\6\q\u\s\a\f\n\y\8\v\t\y\k\c\e\0\q\v\g\9\2\d\p\l\2\t\8\q\i\r\8\j\d\p\c\9\v\k\i\s\b\o\a\9\3\o\2\j\0\u\8\e\r\4\3\g\9\y\3\n\1\4\m\k\l\1\2\c\n\j\t\5\v\9\p\p\8\w\h\o\b\l\t\c\l\c\c\a\4\l\h\c\l\x\f\9\8\p\y\r\7\5\l\x\2\w\v\2\h\l\l\1\y\i\1\n\7\c\l\i\v\e\x\d ]] 00:06:19.625 00:06:19.625 real 0m1.578s 00:06:19.625 user 0m0.877s 00:06:19.625 sys 0m0.487s 00:06:19.625 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.625 ************************************ 00:06:19.625 END TEST dd_flag_nofollow 00:06:19.625 ************************************ 00:06:19.625 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 20:42:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:19.882 20:42:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:19.882 20:42:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.882 20:42:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.882 20:42:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 START TEST dd_flag_noatime 00:06:19.882 ************************************ 00:06:19.882 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:19.882 20:42:41 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721076161 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721076161 00:06:19.883 20:42:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:20.816 20:42:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.816 [2024-07-15 20:42:42.642395] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:20.816 [2024-07-15 20:42:42.642462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63109 ] 00:06:21.075 [2024-07-15 20:42:42.782146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.075 [2024-07-15 20:42:42.865621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.075 [2024-07-15 20:42:42.906422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.334  Copying: 512/512 [B] (average 500 kBps) 00:06:21.334 00:06:21.334 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.334 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721076161 )) 00:06:21.334 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.334 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721076161 )) 00:06:21.334 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.334 [2024-07-15 20:42:43.169818] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
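For context on the dd_flag_noatime run above: the source file's access time is captured with stat --printf=%X, a copy is made with --iflag=noatime, and the test asserts the atime did not move; the pass starting here repeats the copy without the flag and expects the originally recorded atime to now lag behind. A rough sketch of that sequence, noting that whether the atime really advances on the second copy also depends on the filesystem's mount options (noatime/relatime mounts behave differently):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd      # binary path as printed in the trace
  head -c 512 /dev/urandom > dd.dump0                    # stand-in for gen_bytes 512
  atime_before=$(stat --printf=%X dd.dump0)
  "$DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before )) && echo 'noatime left the access time alone'
  sleep 1
  "$DD" --if=dd.dump0 --of=dd.dump1                      # no noatime on this pass
  # On the default mount used here the read is expected to bump the atime.
  (( atime_before < $(stat --printf=%X dd.dump0) )) && echo 'access time moved forward once noatime was dropped'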
00:06:21.334 [2024-07-15 20:42:43.169889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63117 ] 00:06:21.593 [2024-07-15 20:42:43.310081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.593 [2024-07-15 20:42:43.395431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.593 [2024-07-15 20:42:43.436137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.850  Copying: 512/512 [B] (average 500 kBps) 00:06:21.850 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.850 ************************************ 00:06:21.850 END TEST dd_flag_noatime 00:06:21.850 ************************************ 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721076163 )) 00:06:21.850 00:06:21.850 real 0m2.084s 00:06:21.850 user 0m0.581s 00:06:21.850 sys 0m0.490s 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.850 ************************************ 00:06:21.850 START TEST dd_flags_misc 00:06:21.850 ************************************ 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:21.850 20:42:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:22.108 [2024-07-15 20:42:43.770412] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
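For context on the dd_flags_misc block that begins above: each read-side flag in (direct, nonblock) is paired with each write-side flag in (direct, nonblock, sync, dsync) and the copied payload is re-verified every time, which is why eight nearly identical spdk_dd runs follow. The loop shape, matching the flags_ro/flags_rw arrays printed in the trace:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd      # binary path as printed in the trace
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)                 # arrays as set in dd/posix.sh
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          "$DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      done
  done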
00:06:22.108 [2024-07-15 20:42:43.770476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63151 ] 00:06:22.108 [2024-07-15 20:42:43.900343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.108 [2024-07-15 20:42:43.985725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.365 [2024-07-15 20:42:44.026470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.365  Copying: 512/512 [B] (average 500 kBps) 00:06:22.365 00:06:22.365 20:42:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6x9ie6o8js3wnx7b9p8dzkcep2ahil48tpw5fs0f2ilin7dvq99kpsgjpmvb4hn0pp572qeiy7y9egddze84n1bvvqr0txrfpwidrqblv1amuoyyv2dzpdhrvz3diwj8ttjw46jpscdyev9epna3d2x792pv2kcn2ingr6ff7brj7bp2y9qigfnlaln3v1ezpmndif9fkntdxmzbxthce8xwqmixpqkthmf25rjy5jhy0jo9q11oojeroz1l7r3w0ap61qrw3sk70no4tdlcjq4yvfak7vsr6ohwuppbote8fiod73qhvbd9nsu4ro6a8dkpmp4q9gvsj2jsqtpzr7ltdilq4zp3w0shrq125ixell5zhds3jd3g42kpznfbr5omtlr9hpwpi6hhl0ugan1gvr4vhf0c1xm3r1uop9me3pgpvyxdm7kz9jx4pcfdvqhgy1xmnsb5ms6sy9huito4ojcpp9tejn8e7giame7t82to6y0sqeg0zbbfa9x == \l\6\x\9\i\e\6\o\8\j\s\3\w\n\x\7\b\9\p\8\d\z\k\c\e\p\2\a\h\i\l\4\8\t\p\w\5\f\s\0\f\2\i\l\i\n\7\d\v\q\9\9\k\p\s\g\j\p\m\v\b\4\h\n\0\p\p\5\7\2\q\e\i\y\7\y\9\e\g\d\d\z\e\8\4\n\1\b\v\v\q\r\0\t\x\r\f\p\w\i\d\r\q\b\l\v\1\a\m\u\o\y\y\v\2\d\z\p\d\h\r\v\z\3\d\i\w\j\8\t\t\j\w\4\6\j\p\s\c\d\y\e\v\9\e\p\n\a\3\d\2\x\7\9\2\p\v\2\k\c\n\2\i\n\g\r\6\f\f\7\b\r\j\7\b\p\2\y\9\q\i\g\f\n\l\a\l\n\3\v\1\e\z\p\m\n\d\i\f\9\f\k\n\t\d\x\m\z\b\x\t\h\c\e\8\x\w\q\m\i\x\p\q\k\t\h\m\f\2\5\r\j\y\5\j\h\y\0\j\o\9\q\1\1\o\o\j\e\r\o\z\1\l\7\r\3\w\0\a\p\6\1\q\r\w\3\s\k\7\0\n\o\4\t\d\l\c\j\q\4\y\v\f\a\k\7\v\s\r\6\o\h\w\u\p\p\b\o\t\e\8\f\i\o\d\7\3\q\h\v\b\d\9\n\s\u\4\r\o\6\a\8\d\k\p\m\p\4\q\9\g\v\s\j\2\j\s\q\t\p\z\r\7\l\t\d\i\l\q\4\z\p\3\w\0\s\h\r\q\1\2\5\i\x\e\l\l\5\z\h\d\s\3\j\d\3\g\4\2\k\p\z\n\f\b\r\5\o\m\t\l\r\9\h\p\w\p\i\6\h\h\l\0\u\g\a\n\1\g\v\r\4\v\h\f\0\c\1\x\m\3\r\1\u\o\p\9\m\e\3\p\g\p\v\y\x\d\m\7\k\z\9\j\x\4\p\c\f\d\v\q\h\g\y\1\x\m\n\s\b\5\m\s\6\s\y\9\h\u\i\t\o\4\o\j\c\p\p\9\t\e\j\n\8\e\7\g\i\a\m\e\7\t\8\2\t\o\6\y\0\s\q\e\g\0\z\b\b\f\a\9\x ]] 00:06:22.365 20:42:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.365 20:42:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:22.365 [2024-07-15 20:42:44.268287] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:22.365 [2024-07-15 20:42:44.268351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:06:22.622 [2024-07-15 20:42:44.407155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.622 [2024-07-15 20:42:44.497294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.879 [2024-07-15 20:42:44.538134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.879  Copying: 512/512 [B] (average 500 kBps) 00:06:22.879 00:06:22.879 20:42:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6x9ie6o8js3wnx7b9p8dzkcep2ahil48tpw5fs0f2ilin7dvq99kpsgjpmvb4hn0pp572qeiy7y9egddze84n1bvvqr0txrfpwidrqblv1amuoyyv2dzpdhrvz3diwj8ttjw46jpscdyev9epna3d2x792pv2kcn2ingr6ff7brj7bp2y9qigfnlaln3v1ezpmndif9fkntdxmzbxthce8xwqmixpqkthmf25rjy5jhy0jo9q11oojeroz1l7r3w0ap61qrw3sk70no4tdlcjq4yvfak7vsr6ohwuppbote8fiod73qhvbd9nsu4ro6a8dkpmp4q9gvsj2jsqtpzr7ltdilq4zp3w0shrq125ixell5zhds3jd3g42kpznfbr5omtlr9hpwpi6hhl0ugan1gvr4vhf0c1xm3r1uop9me3pgpvyxdm7kz9jx4pcfdvqhgy1xmnsb5ms6sy9huito4ojcpp9tejn8e7giame7t82to6y0sqeg0zbbfa9x == \l\6\x\9\i\e\6\o\8\j\s\3\w\n\x\7\b\9\p\8\d\z\k\c\e\p\2\a\h\i\l\4\8\t\p\w\5\f\s\0\f\2\i\l\i\n\7\d\v\q\9\9\k\p\s\g\j\p\m\v\b\4\h\n\0\p\p\5\7\2\q\e\i\y\7\y\9\e\g\d\d\z\e\8\4\n\1\b\v\v\q\r\0\t\x\r\f\p\w\i\d\r\q\b\l\v\1\a\m\u\o\y\y\v\2\d\z\p\d\h\r\v\z\3\d\i\w\j\8\t\t\j\w\4\6\j\p\s\c\d\y\e\v\9\e\p\n\a\3\d\2\x\7\9\2\p\v\2\k\c\n\2\i\n\g\r\6\f\f\7\b\r\j\7\b\p\2\y\9\q\i\g\f\n\l\a\l\n\3\v\1\e\z\p\m\n\d\i\f\9\f\k\n\t\d\x\m\z\b\x\t\h\c\e\8\x\w\q\m\i\x\p\q\k\t\h\m\f\2\5\r\j\y\5\j\h\y\0\j\o\9\q\1\1\o\o\j\e\r\o\z\1\l\7\r\3\w\0\a\p\6\1\q\r\w\3\s\k\7\0\n\o\4\t\d\l\c\j\q\4\y\v\f\a\k\7\v\s\r\6\o\h\w\u\p\p\b\o\t\e\8\f\i\o\d\7\3\q\h\v\b\d\9\n\s\u\4\r\o\6\a\8\d\k\p\m\p\4\q\9\g\v\s\j\2\j\s\q\t\p\z\r\7\l\t\d\i\l\q\4\z\p\3\w\0\s\h\r\q\1\2\5\i\x\e\l\l\5\z\h\d\s\3\j\d\3\g\4\2\k\p\z\n\f\b\r\5\o\m\t\l\r\9\h\p\w\p\i\6\h\h\l\0\u\g\a\n\1\g\v\r\4\v\h\f\0\c\1\x\m\3\r\1\u\o\p\9\m\e\3\p\g\p\v\y\x\d\m\7\k\z\9\j\x\4\p\c\f\d\v\q\h\g\y\1\x\m\n\s\b\5\m\s\6\s\y\9\h\u\i\t\o\4\o\j\c\p\p\9\t\e\j\n\8\e\7\g\i\a\m\e\7\t\8\2\t\o\6\y\0\s\q\e\g\0\z\b\b\f\a\9\x ]] 00:06:22.879 20:42:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:22.879 20:42:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:22.879 [2024-07-15 20:42:44.775132] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:22.879 [2024-07-15 20:42:44.775221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63170 ] 00:06:23.137 [2024-07-15 20:42:44.914255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.137 [2024-07-15 20:42:44.995331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.137 [2024-07-15 20:42:45.036087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.395  Copying: 512/512 [B] (average 250 kBps) 00:06:23.395 00:06:23.395 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6x9ie6o8js3wnx7b9p8dzkcep2ahil48tpw5fs0f2ilin7dvq99kpsgjpmvb4hn0pp572qeiy7y9egddze84n1bvvqr0txrfpwidrqblv1amuoyyv2dzpdhrvz3diwj8ttjw46jpscdyev9epna3d2x792pv2kcn2ingr6ff7brj7bp2y9qigfnlaln3v1ezpmndif9fkntdxmzbxthce8xwqmixpqkthmf25rjy5jhy0jo9q11oojeroz1l7r3w0ap61qrw3sk70no4tdlcjq4yvfak7vsr6ohwuppbote8fiod73qhvbd9nsu4ro6a8dkpmp4q9gvsj2jsqtpzr7ltdilq4zp3w0shrq125ixell5zhds3jd3g42kpznfbr5omtlr9hpwpi6hhl0ugan1gvr4vhf0c1xm3r1uop9me3pgpvyxdm7kz9jx4pcfdvqhgy1xmnsb5ms6sy9huito4ojcpp9tejn8e7giame7t82to6y0sqeg0zbbfa9x == \l\6\x\9\i\e\6\o\8\j\s\3\w\n\x\7\b\9\p\8\d\z\k\c\e\p\2\a\h\i\l\4\8\t\p\w\5\f\s\0\f\2\i\l\i\n\7\d\v\q\9\9\k\p\s\g\j\p\m\v\b\4\h\n\0\p\p\5\7\2\q\e\i\y\7\y\9\e\g\d\d\z\e\8\4\n\1\b\v\v\q\r\0\t\x\r\f\p\w\i\d\r\q\b\l\v\1\a\m\u\o\y\y\v\2\d\z\p\d\h\r\v\z\3\d\i\w\j\8\t\t\j\w\4\6\j\p\s\c\d\y\e\v\9\e\p\n\a\3\d\2\x\7\9\2\p\v\2\k\c\n\2\i\n\g\r\6\f\f\7\b\r\j\7\b\p\2\y\9\q\i\g\f\n\l\a\l\n\3\v\1\e\z\p\m\n\d\i\f\9\f\k\n\t\d\x\m\z\b\x\t\h\c\e\8\x\w\q\m\i\x\p\q\k\t\h\m\f\2\5\r\j\y\5\j\h\y\0\j\o\9\q\1\1\o\o\j\e\r\o\z\1\l\7\r\3\w\0\a\p\6\1\q\r\w\3\s\k\7\0\n\o\4\t\d\l\c\j\q\4\y\v\f\a\k\7\v\s\r\6\o\h\w\u\p\p\b\o\t\e\8\f\i\o\d\7\3\q\h\v\b\d\9\n\s\u\4\r\o\6\a\8\d\k\p\m\p\4\q\9\g\v\s\j\2\j\s\q\t\p\z\r\7\l\t\d\i\l\q\4\z\p\3\w\0\s\h\r\q\1\2\5\i\x\e\l\l\5\z\h\d\s\3\j\d\3\g\4\2\k\p\z\n\f\b\r\5\o\m\t\l\r\9\h\p\w\p\i\6\h\h\l\0\u\g\a\n\1\g\v\r\4\v\h\f\0\c\1\x\m\3\r\1\u\o\p\9\m\e\3\p\g\p\v\y\x\d\m\7\k\z\9\j\x\4\p\c\f\d\v\q\h\g\y\1\x\m\n\s\b\5\m\s\6\s\y\9\h\u\i\t\o\4\o\j\c\p\p\9\t\e\j\n\8\e\7\g\i\a\m\e\7\t\8\2\t\o\6\y\0\s\q\e\g\0\z\b\b\f\a\9\x ]] 00:06:23.395 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.395 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:23.395 [2024-07-15 20:42:45.270292] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:23.395 [2024-07-15 20:42:45.270363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:06:23.652 [2024-07-15 20:42:45.410692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.652 [2024-07-15 20:42:45.500077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.652 [2024-07-15 20:42:45.540946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.911  Copying: 512/512 [B] (average 500 kBps) 00:06:23.911 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ l6x9ie6o8js3wnx7b9p8dzkcep2ahil48tpw5fs0f2ilin7dvq99kpsgjpmvb4hn0pp572qeiy7y9egddze84n1bvvqr0txrfpwidrqblv1amuoyyv2dzpdhrvz3diwj8ttjw46jpscdyev9epna3d2x792pv2kcn2ingr6ff7brj7bp2y9qigfnlaln3v1ezpmndif9fkntdxmzbxthce8xwqmixpqkthmf25rjy5jhy0jo9q11oojeroz1l7r3w0ap61qrw3sk70no4tdlcjq4yvfak7vsr6ohwuppbote8fiod73qhvbd9nsu4ro6a8dkpmp4q9gvsj2jsqtpzr7ltdilq4zp3w0shrq125ixell5zhds3jd3g42kpznfbr5omtlr9hpwpi6hhl0ugan1gvr4vhf0c1xm3r1uop9me3pgpvyxdm7kz9jx4pcfdvqhgy1xmnsb5ms6sy9huito4ojcpp9tejn8e7giame7t82to6y0sqeg0zbbfa9x == \l\6\x\9\i\e\6\o\8\j\s\3\w\n\x\7\b\9\p\8\d\z\k\c\e\p\2\a\h\i\l\4\8\t\p\w\5\f\s\0\f\2\i\l\i\n\7\d\v\q\9\9\k\p\s\g\j\p\m\v\b\4\h\n\0\p\p\5\7\2\q\e\i\y\7\y\9\e\g\d\d\z\e\8\4\n\1\b\v\v\q\r\0\t\x\r\f\p\w\i\d\r\q\b\l\v\1\a\m\u\o\y\y\v\2\d\z\p\d\h\r\v\z\3\d\i\w\j\8\t\t\j\w\4\6\j\p\s\c\d\y\e\v\9\e\p\n\a\3\d\2\x\7\9\2\p\v\2\k\c\n\2\i\n\g\r\6\f\f\7\b\r\j\7\b\p\2\y\9\q\i\g\f\n\l\a\l\n\3\v\1\e\z\p\m\n\d\i\f\9\f\k\n\t\d\x\m\z\b\x\t\h\c\e\8\x\w\q\m\i\x\p\q\k\t\h\m\f\2\5\r\j\y\5\j\h\y\0\j\o\9\q\1\1\o\o\j\e\r\o\z\1\l\7\r\3\w\0\a\p\6\1\q\r\w\3\s\k\7\0\n\o\4\t\d\l\c\j\q\4\y\v\f\a\k\7\v\s\r\6\o\h\w\u\p\p\b\o\t\e\8\f\i\o\d\7\3\q\h\v\b\d\9\n\s\u\4\r\o\6\a\8\d\k\p\m\p\4\q\9\g\v\s\j\2\j\s\q\t\p\z\r\7\l\t\d\i\l\q\4\z\p\3\w\0\s\h\r\q\1\2\5\i\x\e\l\l\5\z\h\d\s\3\j\d\3\g\4\2\k\p\z\n\f\b\r\5\o\m\t\l\r\9\h\p\w\p\i\6\h\h\l\0\u\g\a\n\1\g\v\r\4\v\h\f\0\c\1\x\m\3\r\1\u\o\p\9\m\e\3\p\g\p\v\y\x\d\m\7\k\z\9\j\x\4\p\c\f\d\v\q\h\g\y\1\x\m\n\s\b\5\m\s\6\s\y\9\h\u\i\t\o\4\o\j\c\p\p\9\t\e\j\n\8\e\7\g\i\a\m\e\7\t\8\2\t\o\6\y\0\s\q\e\g\0\z\b\b\f\a\9\x ]] 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.911 20:42:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:23.911 [2024-07-15 20:42:45.792067] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:23.911 [2024-07-15 20:42:45.792138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63189 ] 00:06:24.169 [2024-07-15 20:42:45.922230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.169 [2024-07-15 20:42:46.005446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.169 [2024-07-15 20:42:46.046375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.450  Copying: 512/512 [B] (average 500 kBps) 00:06:24.450 00:06:24.450 20:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n67d6w98dld5p97ig25hclzsk05f6c5sj4a9m9p92sl5gcuc8ljgfydrsrtrwubbfxc2wnzk5q6pfv89wznkygj86902dc7iq7mq9cjyf8a19j40ba1k3y56gc3exdq6u4q5voyfkb6xyk47pzl1twxoib1s4kb74rabazbtuwvhe3ngyp1lw4yexh72e7d71rblnthjhtvwskdr5mgym6b78x4t9cyrusnch47zft0grh12bwphkxuz1pq6k9fapdkpeyvj80hgot6cexd5f93e0egmyv50zmprv490l9vxv2xt6m7udknjv5t3tk5yyzzuxvdmk3hic2kyo2fy310h9eb1yugcppwvd3dpkrk15kgmdbkivzuq75b1vtjn7xm5vu8svzb2in8s51dr1mms18gmzxhhyrcmb7n9i49i9244vf147hbvto3duivihmz5ybopryi3b7jnc3dzx39j8f85ekhn9dxccg3gz3lrjxphwc3tiks20u6y7r1y == \n\6\7\d\6\w\9\8\d\l\d\5\p\9\7\i\g\2\5\h\c\l\z\s\k\0\5\f\6\c\5\s\j\4\a\9\m\9\p\9\2\s\l\5\g\c\u\c\8\l\j\g\f\y\d\r\s\r\t\r\w\u\b\b\f\x\c\2\w\n\z\k\5\q\6\p\f\v\8\9\w\z\n\k\y\g\j\8\6\9\0\2\d\c\7\i\q\7\m\q\9\c\j\y\f\8\a\1\9\j\4\0\b\a\1\k\3\y\5\6\g\c\3\e\x\d\q\6\u\4\q\5\v\o\y\f\k\b\6\x\y\k\4\7\p\z\l\1\t\w\x\o\i\b\1\s\4\k\b\7\4\r\a\b\a\z\b\t\u\w\v\h\e\3\n\g\y\p\1\l\w\4\y\e\x\h\7\2\e\7\d\7\1\r\b\l\n\t\h\j\h\t\v\w\s\k\d\r\5\m\g\y\m\6\b\7\8\x\4\t\9\c\y\r\u\s\n\c\h\4\7\z\f\t\0\g\r\h\1\2\b\w\p\h\k\x\u\z\1\p\q\6\k\9\f\a\p\d\k\p\e\y\v\j\8\0\h\g\o\t\6\c\e\x\d\5\f\9\3\e\0\e\g\m\y\v\5\0\z\m\p\r\v\4\9\0\l\9\v\x\v\2\x\t\6\m\7\u\d\k\n\j\v\5\t\3\t\k\5\y\y\z\z\u\x\v\d\m\k\3\h\i\c\2\k\y\o\2\f\y\3\1\0\h\9\e\b\1\y\u\g\c\p\p\w\v\d\3\d\p\k\r\k\1\5\k\g\m\d\b\k\i\v\z\u\q\7\5\b\1\v\t\j\n\7\x\m\5\v\u\8\s\v\z\b\2\i\n\8\s\5\1\d\r\1\m\m\s\1\8\g\m\z\x\h\h\y\r\c\m\b\7\n\9\i\4\9\i\9\2\4\4\v\f\1\4\7\h\b\v\t\o\3\d\u\i\v\i\h\m\z\5\y\b\o\p\r\y\i\3\b\7\j\n\c\3\d\z\x\3\9\j\8\f\8\5\e\k\h\n\9\d\x\c\c\g\3\g\z\3\l\r\j\x\p\h\w\c\3\t\i\k\s\2\0\u\6\y\7\r\1\y ]] 00:06:24.450 20:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.450 20:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:24.450 [2024-07-15 20:42:46.279072] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:24.450 [2024-07-15 20:42:46.279144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63193 ] 00:06:24.708 [2024-07-15 20:42:46.412586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.708 [2024-07-15 20:42:46.502552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.708 [2024-07-15 20:42:46.543549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.967  Copying: 512/512 [B] (average 500 kBps) 00:06:24.967 00:06:24.967 20:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n67d6w98dld5p97ig25hclzsk05f6c5sj4a9m9p92sl5gcuc8ljgfydrsrtrwubbfxc2wnzk5q6pfv89wznkygj86902dc7iq7mq9cjyf8a19j40ba1k3y56gc3exdq6u4q5voyfkb6xyk47pzl1twxoib1s4kb74rabazbtuwvhe3ngyp1lw4yexh72e7d71rblnthjhtvwskdr5mgym6b78x4t9cyrusnch47zft0grh12bwphkxuz1pq6k9fapdkpeyvj80hgot6cexd5f93e0egmyv50zmprv490l9vxv2xt6m7udknjv5t3tk5yyzzuxvdmk3hic2kyo2fy310h9eb1yugcppwvd3dpkrk15kgmdbkivzuq75b1vtjn7xm5vu8svzb2in8s51dr1mms18gmzxhhyrcmb7n9i49i9244vf147hbvto3duivihmz5ybopryi3b7jnc3dzx39j8f85ekhn9dxccg3gz3lrjxphwc3tiks20u6y7r1y == \n\6\7\d\6\w\9\8\d\l\d\5\p\9\7\i\g\2\5\h\c\l\z\s\k\0\5\f\6\c\5\s\j\4\a\9\m\9\p\9\2\s\l\5\g\c\u\c\8\l\j\g\f\y\d\r\s\r\t\r\w\u\b\b\f\x\c\2\w\n\z\k\5\q\6\p\f\v\8\9\w\z\n\k\y\g\j\8\6\9\0\2\d\c\7\i\q\7\m\q\9\c\j\y\f\8\a\1\9\j\4\0\b\a\1\k\3\y\5\6\g\c\3\e\x\d\q\6\u\4\q\5\v\o\y\f\k\b\6\x\y\k\4\7\p\z\l\1\t\w\x\o\i\b\1\s\4\k\b\7\4\r\a\b\a\z\b\t\u\w\v\h\e\3\n\g\y\p\1\l\w\4\y\e\x\h\7\2\e\7\d\7\1\r\b\l\n\t\h\j\h\t\v\w\s\k\d\r\5\m\g\y\m\6\b\7\8\x\4\t\9\c\y\r\u\s\n\c\h\4\7\z\f\t\0\g\r\h\1\2\b\w\p\h\k\x\u\z\1\p\q\6\k\9\f\a\p\d\k\p\e\y\v\j\8\0\h\g\o\t\6\c\e\x\d\5\f\9\3\e\0\e\g\m\y\v\5\0\z\m\p\r\v\4\9\0\l\9\v\x\v\2\x\t\6\m\7\u\d\k\n\j\v\5\t\3\t\k\5\y\y\z\z\u\x\v\d\m\k\3\h\i\c\2\k\y\o\2\f\y\3\1\0\h\9\e\b\1\y\u\g\c\p\p\w\v\d\3\d\p\k\r\k\1\5\k\g\m\d\b\k\i\v\z\u\q\7\5\b\1\v\t\j\n\7\x\m\5\v\u\8\s\v\z\b\2\i\n\8\s\5\1\d\r\1\m\m\s\1\8\g\m\z\x\h\h\y\r\c\m\b\7\n\9\i\4\9\i\9\2\4\4\v\f\1\4\7\h\b\v\t\o\3\d\u\i\v\i\h\m\z\5\y\b\o\p\r\y\i\3\b\7\j\n\c\3\d\z\x\3\9\j\8\f\8\5\e\k\h\n\9\d\x\c\c\g\3\g\z\3\l\r\j\x\p\h\w\c\3\t\i\k\s\2\0\u\6\y\7\r\1\y ]] 00:06:24.967 20:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.967 20:42:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:24.967 [2024-07-15 20:42:46.776743] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:24.967 [2024-07-15 20:42:46.776813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63208 ] 00:06:25.226 [2024-07-15 20:42:46.915468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.226 [2024-07-15 20:42:46.996221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.226 [2024-07-15 20:42:47.037110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.484  Copying: 512/512 [B] (average 250 kBps) 00:06:25.484 00:06:25.484 20:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n67d6w98dld5p97ig25hclzsk05f6c5sj4a9m9p92sl5gcuc8ljgfydrsrtrwubbfxc2wnzk5q6pfv89wznkygj86902dc7iq7mq9cjyf8a19j40ba1k3y56gc3exdq6u4q5voyfkb6xyk47pzl1twxoib1s4kb74rabazbtuwvhe3ngyp1lw4yexh72e7d71rblnthjhtvwskdr5mgym6b78x4t9cyrusnch47zft0grh12bwphkxuz1pq6k9fapdkpeyvj80hgot6cexd5f93e0egmyv50zmprv490l9vxv2xt6m7udknjv5t3tk5yyzzuxvdmk3hic2kyo2fy310h9eb1yugcppwvd3dpkrk15kgmdbkivzuq75b1vtjn7xm5vu8svzb2in8s51dr1mms18gmzxhhyrcmb7n9i49i9244vf147hbvto3duivihmz5ybopryi3b7jnc3dzx39j8f85ekhn9dxccg3gz3lrjxphwc3tiks20u6y7r1y == \n\6\7\d\6\w\9\8\d\l\d\5\p\9\7\i\g\2\5\h\c\l\z\s\k\0\5\f\6\c\5\s\j\4\a\9\m\9\p\9\2\s\l\5\g\c\u\c\8\l\j\g\f\y\d\r\s\r\t\r\w\u\b\b\f\x\c\2\w\n\z\k\5\q\6\p\f\v\8\9\w\z\n\k\y\g\j\8\6\9\0\2\d\c\7\i\q\7\m\q\9\c\j\y\f\8\a\1\9\j\4\0\b\a\1\k\3\y\5\6\g\c\3\e\x\d\q\6\u\4\q\5\v\o\y\f\k\b\6\x\y\k\4\7\p\z\l\1\t\w\x\o\i\b\1\s\4\k\b\7\4\r\a\b\a\z\b\t\u\w\v\h\e\3\n\g\y\p\1\l\w\4\y\e\x\h\7\2\e\7\d\7\1\r\b\l\n\t\h\j\h\t\v\w\s\k\d\r\5\m\g\y\m\6\b\7\8\x\4\t\9\c\y\r\u\s\n\c\h\4\7\z\f\t\0\g\r\h\1\2\b\w\p\h\k\x\u\z\1\p\q\6\k\9\f\a\p\d\k\p\e\y\v\j\8\0\h\g\o\t\6\c\e\x\d\5\f\9\3\e\0\e\g\m\y\v\5\0\z\m\p\r\v\4\9\0\l\9\v\x\v\2\x\t\6\m\7\u\d\k\n\j\v\5\t\3\t\k\5\y\y\z\z\u\x\v\d\m\k\3\h\i\c\2\k\y\o\2\f\y\3\1\0\h\9\e\b\1\y\u\g\c\p\p\w\v\d\3\d\p\k\r\k\1\5\k\g\m\d\b\k\i\v\z\u\q\7\5\b\1\v\t\j\n\7\x\m\5\v\u\8\s\v\z\b\2\i\n\8\s\5\1\d\r\1\m\m\s\1\8\g\m\z\x\h\h\y\r\c\m\b\7\n\9\i\4\9\i\9\2\4\4\v\f\1\4\7\h\b\v\t\o\3\d\u\i\v\i\h\m\z\5\y\b\o\p\r\y\i\3\b\7\j\n\c\3\d\z\x\3\9\j\8\f\8\5\e\k\h\n\9\d\x\c\c\g\3\g\z\3\l\r\j\x\p\h\w\c\3\t\i\k\s\2\0\u\6\y\7\r\1\y ]] 00:06:25.484 20:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.484 20:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:25.484 [2024-07-15 20:42:47.282440] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:25.484 [2024-07-15 20:42:47.282510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63212 ] 00:06:25.744 [2024-07-15 20:42:47.422626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.744 [2024-07-15 20:42:47.510307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.744 [2024-07-15 20:42:47.551849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.003  Copying: 512/512 [B] (average 125 kBps) 00:06:26.003 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ n67d6w98dld5p97ig25hclzsk05f6c5sj4a9m9p92sl5gcuc8ljgfydrsrtrwubbfxc2wnzk5q6pfv89wznkygj86902dc7iq7mq9cjyf8a19j40ba1k3y56gc3exdq6u4q5voyfkb6xyk47pzl1twxoib1s4kb74rabazbtuwvhe3ngyp1lw4yexh72e7d71rblnthjhtvwskdr5mgym6b78x4t9cyrusnch47zft0grh12bwphkxuz1pq6k9fapdkpeyvj80hgot6cexd5f93e0egmyv50zmprv490l9vxv2xt6m7udknjv5t3tk5yyzzuxvdmk3hic2kyo2fy310h9eb1yugcppwvd3dpkrk15kgmdbkivzuq75b1vtjn7xm5vu8svzb2in8s51dr1mms18gmzxhhyrcmb7n9i49i9244vf147hbvto3duivihmz5ybopryi3b7jnc3dzx39j8f85ekhn9dxccg3gz3lrjxphwc3tiks20u6y7r1y == \n\6\7\d\6\w\9\8\d\l\d\5\p\9\7\i\g\2\5\h\c\l\z\s\k\0\5\f\6\c\5\s\j\4\a\9\m\9\p\9\2\s\l\5\g\c\u\c\8\l\j\g\f\y\d\r\s\r\t\r\w\u\b\b\f\x\c\2\w\n\z\k\5\q\6\p\f\v\8\9\w\z\n\k\y\g\j\8\6\9\0\2\d\c\7\i\q\7\m\q\9\c\j\y\f\8\a\1\9\j\4\0\b\a\1\k\3\y\5\6\g\c\3\e\x\d\q\6\u\4\q\5\v\o\y\f\k\b\6\x\y\k\4\7\p\z\l\1\t\w\x\o\i\b\1\s\4\k\b\7\4\r\a\b\a\z\b\t\u\w\v\h\e\3\n\g\y\p\1\l\w\4\y\e\x\h\7\2\e\7\d\7\1\r\b\l\n\t\h\j\h\t\v\w\s\k\d\r\5\m\g\y\m\6\b\7\8\x\4\t\9\c\y\r\u\s\n\c\h\4\7\z\f\t\0\g\r\h\1\2\b\w\p\h\k\x\u\z\1\p\q\6\k\9\f\a\p\d\k\p\e\y\v\j\8\0\h\g\o\t\6\c\e\x\d\5\f\9\3\e\0\e\g\m\y\v\5\0\z\m\p\r\v\4\9\0\l\9\v\x\v\2\x\t\6\m\7\u\d\k\n\j\v\5\t\3\t\k\5\y\y\z\z\u\x\v\d\m\k\3\h\i\c\2\k\y\o\2\f\y\3\1\0\h\9\e\b\1\y\u\g\c\p\p\w\v\d\3\d\p\k\r\k\1\5\k\g\m\d\b\k\i\v\z\u\q\7\5\b\1\v\t\j\n\7\x\m\5\v\u\8\s\v\z\b\2\i\n\8\s\5\1\d\r\1\m\m\s\1\8\g\m\z\x\h\h\y\r\c\m\b\7\n\9\i\4\9\i\9\2\4\4\v\f\1\4\7\h\b\v\t\o\3\d\u\i\v\i\h\m\z\5\y\b\o\p\r\y\i\3\b\7\j\n\c\3\d\z\x\3\9\j\8\f\8\5\e\k\h\n\9\d\x\c\c\g\3\g\z\3\l\r\j\x\p\h\w\c\3\t\i\k\s\2\0\u\6\y\7\r\1\y ]] 00:06:26.003 00:06:26.003 real 0m4.047s 00:06:26.003 user 0m2.252s 00:06:26.003 sys 0m1.771s 00:06:26.003 ************************************ 00:06:26.003 END TEST dd_flags_misc 00:06:26.003 ************************************ 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:26.003 * Second test run, disabling liburing, forcing AIO 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.003 ************************************ 00:06:26.003 START TEST dd_flag_append_forced_aio 00:06:26.003 ************************************ 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=n25u95qlsi8hmvo4vu66b33mnwzd02zz 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=sb0ddaj1s56a95rgfh608ndo109uac0r 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s n25u95qlsi8hmvo4vu66b33mnwzd02zz 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s sb0ddaj1s56a95rgfh608ndo109uac0r 00:06:26.003 20:42:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:26.003 [2024-07-15 20:42:47.889239] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
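For context on the second pass that starts above: after the liburing-backed run, DD_APP gains the --aio switch ("Second test run, disabling liburing, forcing AIO"), so the same posix cases repeat against the AIO backend; the append re-run here differs from the first only by that extra argument. A minimal sketch of how the switch changes the invocation:

  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio)   # mirrors DD_APP+=("--aio") in the trace
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append
  # The append/directory/nofollow assertions themselves are unchanged on this pass.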
00:06:26.003 [2024-07-15 20:42:47.889301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63246 ] 00:06:26.263 [2024-07-15 20:42:48.029713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.263 [2024-07-15 20:42:48.117880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.263 [2024-07-15 20:42:48.158601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.544  Copying: 32/32 [B] (average 31 kBps) 00:06:26.544 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ sb0ddaj1s56a95rgfh608ndo109uac0rn25u95qlsi8hmvo4vu66b33mnwzd02zz == \s\b\0\d\d\a\j\1\s\5\6\a\9\5\r\g\f\h\6\0\8\n\d\o\1\0\9\u\a\c\0\r\n\2\5\u\9\5\q\l\s\i\8\h\m\v\o\4\v\u\6\6\b\3\3\m\n\w\z\d\0\2\z\z ]] 00:06:26.544 00:06:26.544 real 0m0.548s 00:06:26.544 user 0m0.293s 00:06:26.544 sys 0m0.136s 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.544 ************************************ 00:06:26.544 END TEST dd_flag_append_forced_aio 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.544 ************************************ 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.544 ************************************ 00:06:26.544 START TEST dd_flag_directory_forced_aio 00:06:26.544 ************************************ 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.544 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.803 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:26.803 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.803 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.803 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:26.804 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:26.804 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.804 [2024-07-15 20:42:48.505417] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:26.804 [2024-07-15 20:42:48.505481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63267 ] 00:06:26.804 [2024-07-15 20:42:48.646607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.063 [2024-07-15 20:42:48.721715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.063 [2024-07-15 20:42:48.762648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.063 [2024-07-15 20:42:48.788446] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.063 [2024-07-15 20:42:48.788494] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.063 [2024-07-15 20:42:48.788506] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.063 [2024-07-15 20:42:48.877815] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.063 20:42:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:27.323 [2024-07-15 20:42:49.007491] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:27.323 [2024-07-15 20:42:49.007574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63285 ] 00:06:27.323 [2024-07-15 20:42:49.145894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.323 [2024-07-15 20:42:49.230577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.582 [2024-07-15 20:42:49.271341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.582 [2024-07-15 20:42:49.297230] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.582 [2024-07-15 20:42:49.297272] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:27.582 [2024-07-15 20:42:49.297285] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.582 [2024-07-15 20:42:49.386464] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:27.582 
20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.582 00:06:27.582 real 0m1.024s 00:06:27.582 user 0m0.560s 00:06:27.582 sys 0m0.257s 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.582 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:27.582 ************************************ 00:06:27.582 END TEST dd_flag_directory_forced_aio 00:06:27.582 ************************************ 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.841 ************************************ 00:06:27.841 START TEST dd_flag_nofollow_forced_aio 00:06:27.841 ************************************ 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:27.841 20:42:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.841 [2024-07-15 20:42:49.612723] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:27.841 [2024-07-15 20:42:49.612790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63308 ] 00:06:28.100 [2024-07-15 20:42:49.753542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.100 [2024-07-15 20:42:49.836728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.100 [2024-07-15 20:42:49.877593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.100 [2024-07-15 20:42:49.906026] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:28.100 [2024-07-15 20:42:49.906073] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:28.100 [2024-07-15 20:42:49.906087] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.100 [2024-07-15 20:42:49.996927] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.360 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:28.360 [2024-07-15 20:42:50.153449] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:28.360 [2024-07-15 20:42:50.153547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63323 ] 00:06:28.619 [2024-07-15 20:42:50.301447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.619 [2024-07-15 20:42:50.389590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.619 [2024-07-15 20:42:50.430318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.619 [2024-07-15 20:42:50.456408] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:28.619 [2024-07-15 20:42:50.456454] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:28.619 [2024-07-15 20:42:50.456468] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.878 [2024-07-15 20:42:50.545862] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:28.878 20:42:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.878 [2024-07-15 20:42:50.683890] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:28.878 [2024-07-15 20:42:50.684068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63326 ] 00:06:29.136 [2024-07-15 20:42:50.824245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.136 [2024-07-15 20:42:50.906495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.136 [2024-07-15 20:42:50.947519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.395  Copying: 512/512 [B] (average 500 kBps) 00:06:29.395 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ccbfjv16s0284o2c9ywy9r5tfsoiyio8t4r6un80xbpr3q5wolpdtsracmda6t2po12w63wzd40aknlrvx0e4hsa4iplm07wxbhll7eq5y92n0zxqtmmbfd429lvcnltqs3f12nuny7mmzzjvnrrw4vd8xs3sh4sj1xe8nec496b4384ejpdkd6wtkl3wo9spw0fxv63usykrrafnc3ud3rkkv9lao6n6hpxahnxwd6dhfws86sbz4mjbzlkw0ksbxwfi1tvse1bm5jevybrqoef5lr19n67h0casyiejdr7b93416z8s53quz8bjl15qs012v6ul7wfwqbqb3so27gy2uom75c1evf0t0odthre8ve3flz4sox20ongakqsnz7p9lbs3yp3e2j1dbegwibiofiigjl41yl3o89c1psuu2ln8e9tavfv1ngfhcls3o4tz9px2ni0qic4aexzf8inazri1wf9prihs1e77ltv5j0q9dip00mclsz1fim1 == \c\c\b\f\j\v\1\6\s\0\2\8\4\o\2\c\9\y\w\y\9\r\5\t\f\s\o\i\y\i\o\8\t\4\r\6\u\n\8\0\x\b\p\r\3\q\5\w\o\l\p\d\t\s\r\a\c\m\d\a\6\t\2\p\o\1\2\w\6\3\w\z\d\4\0\a\k\n\l\r\v\x\0\e\4\h\s\a\4\i\p\l\m\0\7\w\x\b\h\l\l\7\e\q\5\y\9\2\n\0\z\x\q\t\m\m\b\f\d\4\2\9\l\v\c\n\l\t\q\s\3\f\1\2\n\u\n\y\7\m\m\z\z\j\v\n\r\r\w\4\v\d\8\x\s\3\s\h\4\s\j\1\x\e\8\n\e\c\4\9\6\b\4\3\8\4\e\j\p\d\k\d\6\w\t\k\l\3\w\o\9\s\p\w\0\f\x\v\6\3\u\s\y\k\r\r\a\f\n\c\3\u\d\3\r\k\k\v\9\l\a\o\6\n\6\h\p\x\a\h\n\x\w\d\6\d\h\f\w\s\8\6\s\b\z\4\m\j\b\z\l\k\w\0\k\s\b\x\w\f\i\1\t\v\s\e\1\b\m\5\j\e\v\y\b\r\q\o\e\f\5\l\r\1\9\n\6\7\h\0\c\a\s\y\i\e\j\d\r\7\b\9\3\4\1\6\z\8\s\5\3\q\u\z\8\b\j\l\1\5\q\s\0\1\2\v\6\u\l\7\w\f\w\q\b\q\b\3\s\o\2\7\g\y\2\u\o\m\7\5\c\1\e\v\f\0\t\0\o\d\t\h\r\e\8\v\e\3\f\l\z\4\s\o\x\2\0\o\n\g\a\k\q\s\n\z\7\p\9\l\b\s\3\y\p\3\e\2\j\1\d\b\e\g\w\i\b\i\o\f\i\i\g\j\l\4\1\y\l\3\o\8\9\c\1\p\s\u\u\2\l\n\8\e\9\t\a\v\f\v\1\n\g\f\h\c\l\s\3\o\4\t\z\9\p\x\2\n\i\0\q\i\c\4\a\e\x\z\f\8\i\n\a\z\r\i\1\w\f\9\p\r\i\h\s\1\e\7\7\l\t\v\5\j\0\q\9\d\i\p\0\0\m\c\l\s\z\1\f\i\m\1 ]] 00:06:29.395 00:06:29.395 real 0m1.625s 00:06:29.395 user 0m0.894s 00:06:29.395 sys 0m0.399s 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.395 ************************************ 00:06:29.395 END TEST dd_flag_nofollow_forced_aio 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 
00:06:29.395 ************************************ 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:29.395 ************************************ 00:06:29.395 START TEST dd_flag_noatime_forced_aio 00:06:29.395 ************************************ 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721076170 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721076171 00:06:29.395 20:42:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:30.771 20:42:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.771 [2024-07-15 20:42:52.321973] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:30.771 [2024-07-15 20:42:52.322053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63371 ] 00:06:30.771 [2024-07-15 20:42:52.455874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.771 [2024-07-15 20:42:52.541415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.771 [2024-07-15 20:42:52.582242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.029  Copying: 512/512 [B] (average 500 kBps) 00:06:31.029 00:06:31.030 20:42:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.030 20:42:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721076170 )) 00:06:31.030 20:42:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.030 20:42:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721076171 )) 00:06:31.030 20:42:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.030 [2024-07-15 20:42:52.859363] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:31.030 [2024-07-15 20:42:52.859428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63387 ] 00:06:31.319 [2024-07-15 20:42:52.999302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.319 [2024-07-15 20:42:53.082091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.319 [2024-07-15 20:42:53.122914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.576  Copying: 512/512 [B] (average 500 kBps) 00:06:31.576 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721076173 )) 00:06:31.576 00:06:31.576 real 0m2.110s 00:06:31.576 user 0m0.603s 00:06:31.576 sys 0m0.262s 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.576 ************************************ 00:06:31.576 END TEST dd_flag_noatime_forced_aio 00:06:31.576 ************************************ 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.576 20:42:53 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.576 ************************************ 00:06:31.576 START TEST dd_flags_misc_forced_aio 00:06:31.576 ************************************ 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.576 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:31.576 [2024-07-15 20:42:53.471033] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:31.576 [2024-07-15 20:42:53.471241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63409 ] 00:06:31.833 [2024-07-15 20:42:53.611447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.833 [2024-07-15 20:42:53.703737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.090 [2024-07-15 20:42:53.745068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.090  Copying: 512/512 [B] (average 500 kBps) 00:06:32.090 00:06:32.090 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pj0d6g0ga6cm20hp9e6x45qod9ziivog8w7ft1gqqp71558i252cj9hqtns8lvwj31fqbx5gb0ujkdr1jjcujcgo7p8n5huydgkzk1c4jjopg0b4q26c4idrz231nqx9zvjrjbtkflop34y3rdvyymq8yyom8dzcsyg7qb5m8vl17c8p66xiaa6enexyxn0qwi1x3m0i3bw35m4l4693h9s15fm25cd1cry231vtyf9ggyhaxr305dr1e4gvchqwhi4ighhszl5w8cp43dznvibgatuoaiin8g1bj51fzptlz29sgpft2ocmi7vuska22gmb11h1smcrrevkwf9iyuhbbgufec70zh8fnof8635f7e7fgqp2tnotagch2hui3w0f4k912m7bi8ytumtp98l4utveg9y2ix9i7wr68lre2ky3iyhq9zngyq3zb55kx06dg74hnjnwaga9znxusixo2s6d6xba0q2h4kgamvxskx2kx3qmk9jjmv6hdmap == 
\p\j\0\d\6\g\0\g\a\6\c\m\2\0\h\p\9\e\6\x\4\5\q\o\d\9\z\i\i\v\o\g\8\w\7\f\t\1\g\q\q\p\7\1\5\5\8\i\2\5\2\c\j\9\h\q\t\n\s\8\l\v\w\j\3\1\f\q\b\x\5\g\b\0\u\j\k\d\r\1\j\j\c\u\j\c\g\o\7\p\8\n\5\h\u\y\d\g\k\z\k\1\c\4\j\j\o\p\g\0\b\4\q\2\6\c\4\i\d\r\z\2\3\1\n\q\x\9\z\v\j\r\j\b\t\k\f\l\o\p\3\4\y\3\r\d\v\y\y\m\q\8\y\y\o\m\8\d\z\c\s\y\g\7\q\b\5\m\8\v\l\1\7\c\8\p\6\6\x\i\a\a\6\e\n\e\x\y\x\n\0\q\w\i\1\x\3\m\0\i\3\b\w\3\5\m\4\l\4\6\9\3\h\9\s\1\5\f\m\2\5\c\d\1\c\r\y\2\3\1\v\t\y\f\9\g\g\y\h\a\x\r\3\0\5\d\r\1\e\4\g\v\c\h\q\w\h\i\4\i\g\h\h\s\z\l\5\w\8\c\p\4\3\d\z\n\v\i\b\g\a\t\u\o\a\i\i\n\8\g\1\b\j\5\1\f\z\p\t\l\z\2\9\s\g\p\f\t\2\o\c\m\i\7\v\u\s\k\a\2\2\g\m\b\1\1\h\1\s\m\c\r\r\e\v\k\w\f\9\i\y\u\h\b\b\g\u\f\e\c\7\0\z\h\8\f\n\o\f\8\6\3\5\f\7\e\7\f\g\q\p\2\t\n\o\t\a\g\c\h\2\h\u\i\3\w\0\f\4\k\9\1\2\m\7\b\i\8\y\t\u\m\t\p\9\8\l\4\u\t\v\e\g\9\y\2\i\x\9\i\7\w\r\6\8\l\r\e\2\k\y\3\i\y\h\q\9\z\n\g\y\q\3\z\b\5\5\k\x\0\6\d\g\7\4\h\n\j\n\w\a\g\a\9\z\n\x\u\s\i\x\o\2\s\6\d\6\x\b\a\0\q\2\h\4\k\g\a\m\v\x\s\k\x\2\k\x\3\q\m\k\9\j\j\m\v\6\h\d\m\a\p ]] 00:06:32.090 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.090 20:42:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:32.347 [2024-07-15 20:42:54.021967] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:32.347 [2024-07-15 20:42:54.022043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63422 ] 00:06:32.347 [2024-07-15 20:42:54.162217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.347 [2024-07-15 20:42:54.248635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.605 [2024-07-15 20:42:54.289732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.605  Copying: 512/512 [B] (average 500 kBps) 00:06:32.605 00:06:32.605 20:42:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pj0d6g0ga6cm20hp9e6x45qod9ziivog8w7ft1gqqp71558i252cj9hqtns8lvwj31fqbx5gb0ujkdr1jjcujcgo7p8n5huydgkzk1c4jjopg0b4q26c4idrz231nqx9zvjrjbtkflop34y3rdvyymq8yyom8dzcsyg7qb5m8vl17c8p66xiaa6enexyxn0qwi1x3m0i3bw35m4l4693h9s15fm25cd1cry231vtyf9ggyhaxr305dr1e4gvchqwhi4ighhszl5w8cp43dznvibgatuoaiin8g1bj51fzptlz29sgpft2ocmi7vuska22gmb11h1smcrrevkwf9iyuhbbgufec70zh8fnof8635f7e7fgqp2tnotagch2hui3w0f4k912m7bi8ytumtp98l4utveg9y2ix9i7wr68lre2ky3iyhq9zngyq3zb55kx06dg74hnjnwaga9znxusixo2s6d6xba0q2h4kgamvxskx2kx3qmk9jjmv6hdmap == 
\p\j\0\d\6\g\0\g\a\6\c\m\2\0\h\p\9\e\6\x\4\5\q\o\d\9\z\i\i\v\o\g\8\w\7\f\t\1\g\q\q\p\7\1\5\5\8\i\2\5\2\c\j\9\h\q\t\n\s\8\l\v\w\j\3\1\f\q\b\x\5\g\b\0\u\j\k\d\r\1\j\j\c\u\j\c\g\o\7\p\8\n\5\h\u\y\d\g\k\z\k\1\c\4\j\j\o\p\g\0\b\4\q\2\6\c\4\i\d\r\z\2\3\1\n\q\x\9\z\v\j\r\j\b\t\k\f\l\o\p\3\4\y\3\r\d\v\y\y\m\q\8\y\y\o\m\8\d\z\c\s\y\g\7\q\b\5\m\8\v\l\1\7\c\8\p\6\6\x\i\a\a\6\e\n\e\x\y\x\n\0\q\w\i\1\x\3\m\0\i\3\b\w\3\5\m\4\l\4\6\9\3\h\9\s\1\5\f\m\2\5\c\d\1\c\r\y\2\3\1\v\t\y\f\9\g\g\y\h\a\x\r\3\0\5\d\r\1\e\4\g\v\c\h\q\w\h\i\4\i\g\h\h\s\z\l\5\w\8\c\p\4\3\d\z\n\v\i\b\g\a\t\u\o\a\i\i\n\8\g\1\b\j\5\1\f\z\p\t\l\z\2\9\s\g\p\f\t\2\o\c\m\i\7\v\u\s\k\a\2\2\g\m\b\1\1\h\1\s\m\c\r\r\e\v\k\w\f\9\i\y\u\h\b\b\g\u\f\e\c\7\0\z\h\8\f\n\o\f\8\6\3\5\f\7\e\7\f\g\q\p\2\t\n\o\t\a\g\c\h\2\h\u\i\3\w\0\f\4\k\9\1\2\m\7\b\i\8\y\t\u\m\t\p\9\8\l\4\u\t\v\e\g\9\y\2\i\x\9\i\7\w\r\6\8\l\r\e\2\k\y\3\i\y\h\q\9\z\n\g\y\q\3\z\b\5\5\k\x\0\6\d\g\7\4\h\n\j\n\w\a\g\a\9\z\n\x\u\s\i\x\o\2\s\6\d\6\x\b\a\0\q\2\h\4\k\g\a\m\v\x\s\k\x\2\k\x\3\q\m\k\9\j\j\m\v\6\h\d\m\a\p ]] 00:06:32.605 20:42:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.605 20:42:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:32.864 [2024-07-15 20:42:54.553766] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:32.864 [2024-07-15 20:42:54.553874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:06:32.864 [2024-07-15 20:42:54.703366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.122 [2024-07-15 20:42:54.797814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.122 [2024-07-15 20:42:54.839215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.380  Copying: 512/512 [B] (average 125 kBps) 00:06:33.380 00:06:33.380 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pj0d6g0ga6cm20hp9e6x45qod9ziivog8w7ft1gqqp71558i252cj9hqtns8lvwj31fqbx5gb0ujkdr1jjcujcgo7p8n5huydgkzk1c4jjopg0b4q26c4idrz231nqx9zvjrjbtkflop34y3rdvyymq8yyom8dzcsyg7qb5m8vl17c8p66xiaa6enexyxn0qwi1x3m0i3bw35m4l4693h9s15fm25cd1cry231vtyf9ggyhaxr305dr1e4gvchqwhi4ighhszl5w8cp43dznvibgatuoaiin8g1bj51fzptlz29sgpft2ocmi7vuska22gmb11h1smcrrevkwf9iyuhbbgufec70zh8fnof8635f7e7fgqp2tnotagch2hui3w0f4k912m7bi8ytumtp98l4utveg9y2ix9i7wr68lre2ky3iyhq9zngyq3zb55kx06dg74hnjnwaga9znxusixo2s6d6xba0q2h4kgamvxskx2kx3qmk9jjmv6hdmap == 
\p\j\0\d\6\g\0\g\a\6\c\m\2\0\h\p\9\e\6\x\4\5\q\o\d\9\z\i\i\v\o\g\8\w\7\f\t\1\g\q\q\p\7\1\5\5\8\i\2\5\2\c\j\9\h\q\t\n\s\8\l\v\w\j\3\1\f\q\b\x\5\g\b\0\u\j\k\d\r\1\j\j\c\u\j\c\g\o\7\p\8\n\5\h\u\y\d\g\k\z\k\1\c\4\j\j\o\p\g\0\b\4\q\2\6\c\4\i\d\r\z\2\3\1\n\q\x\9\z\v\j\r\j\b\t\k\f\l\o\p\3\4\y\3\r\d\v\y\y\m\q\8\y\y\o\m\8\d\z\c\s\y\g\7\q\b\5\m\8\v\l\1\7\c\8\p\6\6\x\i\a\a\6\e\n\e\x\y\x\n\0\q\w\i\1\x\3\m\0\i\3\b\w\3\5\m\4\l\4\6\9\3\h\9\s\1\5\f\m\2\5\c\d\1\c\r\y\2\3\1\v\t\y\f\9\g\g\y\h\a\x\r\3\0\5\d\r\1\e\4\g\v\c\h\q\w\h\i\4\i\g\h\h\s\z\l\5\w\8\c\p\4\3\d\z\n\v\i\b\g\a\t\u\o\a\i\i\n\8\g\1\b\j\5\1\f\z\p\t\l\z\2\9\s\g\p\f\t\2\o\c\m\i\7\v\u\s\k\a\2\2\g\m\b\1\1\h\1\s\m\c\r\r\e\v\k\w\f\9\i\y\u\h\b\b\g\u\f\e\c\7\0\z\h\8\f\n\o\f\8\6\3\5\f\7\e\7\f\g\q\p\2\t\n\o\t\a\g\c\h\2\h\u\i\3\w\0\f\4\k\9\1\2\m\7\b\i\8\y\t\u\m\t\p\9\8\l\4\u\t\v\e\g\9\y\2\i\x\9\i\7\w\r\6\8\l\r\e\2\k\y\3\i\y\h\q\9\z\n\g\y\q\3\z\b\5\5\k\x\0\6\d\g\7\4\h\n\j\n\w\a\g\a\9\z\n\x\u\s\i\x\o\2\s\6\d\6\x\b\a\0\q\2\h\4\k\g\a\m\v\x\s\k\x\2\k\x\3\q\m\k\9\j\j\m\v\6\h\d\m\a\p ]] 00:06:33.380 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.380 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:33.380 [2024-07-15 20:42:55.107470] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:33.380 [2024-07-15 20:42:55.107538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63437 ] 00:06:33.380 [2024-07-15 20:42:55.247130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.638 [2024-07-15 20:42:55.341222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.638 [2024-07-15 20:42:55.382096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.896  Copying: 512/512 [B] (average 500 kBps) 00:06:33.896 00:06:33.896 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ pj0d6g0ga6cm20hp9e6x45qod9ziivog8w7ft1gqqp71558i252cj9hqtns8lvwj31fqbx5gb0ujkdr1jjcujcgo7p8n5huydgkzk1c4jjopg0b4q26c4idrz231nqx9zvjrjbtkflop34y3rdvyymq8yyom8dzcsyg7qb5m8vl17c8p66xiaa6enexyxn0qwi1x3m0i3bw35m4l4693h9s15fm25cd1cry231vtyf9ggyhaxr305dr1e4gvchqwhi4ighhszl5w8cp43dznvibgatuoaiin8g1bj51fzptlz29sgpft2ocmi7vuska22gmb11h1smcrrevkwf9iyuhbbgufec70zh8fnof8635f7e7fgqp2tnotagch2hui3w0f4k912m7bi8ytumtp98l4utveg9y2ix9i7wr68lre2ky3iyhq9zngyq3zb55kx06dg74hnjnwaga9znxusixo2s6d6xba0q2h4kgamvxskx2kx3qmk9jjmv6hdmap == 
\p\j\0\d\6\g\0\g\a\6\c\m\2\0\h\p\9\e\6\x\4\5\q\o\d\9\z\i\i\v\o\g\8\w\7\f\t\1\g\q\q\p\7\1\5\5\8\i\2\5\2\c\j\9\h\q\t\n\s\8\l\v\w\j\3\1\f\q\b\x\5\g\b\0\u\j\k\d\r\1\j\j\c\u\j\c\g\o\7\p\8\n\5\h\u\y\d\g\k\z\k\1\c\4\j\j\o\p\g\0\b\4\q\2\6\c\4\i\d\r\z\2\3\1\n\q\x\9\z\v\j\r\j\b\t\k\f\l\o\p\3\4\y\3\r\d\v\y\y\m\q\8\y\y\o\m\8\d\z\c\s\y\g\7\q\b\5\m\8\v\l\1\7\c\8\p\6\6\x\i\a\a\6\e\n\e\x\y\x\n\0\q\w\i\1\x\3\m\0\i\3\b\w\3\5\m\4\l\4\6\9\3\h\9\s\1\5\f\m\2\5\c\d\1\c\r\y\2\3\1\v\t\y\f\9\g\g\y\h\a\x\r\3\0\5\d\r\1\e\4\g\v\c\h\q\w\h\i\4\i\g\h\h\s\z\l\5\w\8\c\p\4\3\d\z\n\v\i\b\g\a\t\u\o\a\i\i\n\8\g\1\b\j\5\1\f\z\p\t\l\z\2\9\s\g\p\f\t\2\o\c\m\i\7\v\u\s\k\a\2\2\g\m\b\1\1\h\1\s\m\c\r\r\e\v\k\w\f\9\i\y\u\h\b\b\g\u\f\e\c\7\0\z\h\8\f\n\o\f\8\6\3\5\f\7\e\7\f\g\q\p\2\t\n\o\t\a\g\c\h\2\h\u\i\3\w\0\f\4\k\9\1\2\m\7\b\i\8\y\t\u\m\t\p\9\8\l\4\u\t\v\e\g\9\y\2\i\x\9\i\7\w\r\6\8\l\r\e\2\k\y\3\i\y\h\q\9\z\n\g\y\q\3\z\b\5\5\k\x\0\6\d\g\7\4\h\n\j\n\w\a\g\a\9\z\n\x\u\s\i\x\o\2\s\6\d\6\x\b\a\0\q\2\h\4\k\g\a\m\v\x\s\k\x\2\k\x\3\q\m\k\9\j\j\m\v\6\h\d\m\a\p ]] 00:06:33.897 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:33.897 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:33.897 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:33.897 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.897 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.897 20:42:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:33.897 [2024-07-15 20:42:55.641185] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:33.897 [2024-07-15 20:42:55.641264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63439 ] 00:06:33.897 [2024-07-15 20:42:55.779485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.156 [2024-07-15 20:42:55.869665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.156 [2024-07-15 20:42:55.910532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.414  Copying: 512/512 [B] (average 500 kBps) 00:06:34.414 00:06:34.414 20:42:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jbk5xtsuknw559vqb6dc5wtl6w5fy4mk9pyod96l94ki6wehf68dmd9ey6p653wuomibiq3jjdm4cjfh7rm4xbmp50tcofwb94dklhvbtifiyxngq5n8szn03ycxpqfp1j90zwkqjlaceb2gxe248nh49nghxavsiztd19uv2pk9x1dlmwl48zb47xzzknfbwm8qprusdv8yiypv5wqqx3uqxrombdd1w33cbvdg9ae7fe0qk8usl96wbtvuku940uguo5hc97d278qvz6exwgiu09ajo3oik3gb4jm9u5r3c8a0bnk0v2bxw638weyzhtyvm9728fb30t5gbgvcccqwlsb2hmaxv4fhoo5ysu0tqqas814pyho5vjonczxw0xzt2pm6bwnjfwz1ygpqli3bzuk3ncjx9exy8ki24pr0u8e9suf85toj1c1dtq45rogopsh51lhztl5u5s5le7hd5qtzs08gg7r18tgfb1ueetkh7b4a1llvrmll829b == \j\b\k\5\x\t\s\u\k\n\w\5\5\9\v\q\b\6\d\c\5\w\t\l\6\w\5\f\y\4\m\k\9\p\y\o\d\9\6\l\9\4\k\i\6\w\e\h\f\6\8\d\m\d\9\e\y\6\p\6\5\3\w\u\o\m\i\b\i\q\3\j\j\d\m\4\c\j\f\h\7\r\m\4\x\b\m\p\5\0\t\c\o\f\w\b\9\4\d\k\l\h\v\b\t\i\f\i\y\x\n\g\q\5\n\8\s\z\n\0\3\y\c\x\p\q\f\p\1\j\9\0\z\w\k\q\j\l\a\c\e\b\2\g\x\e\2\4\8\n\h\4\9\n\g\h\x\a\v\s\i\z\t\d\1\9\u\v\2\p\k\9\x\1\d\l\m\w\l\4\8\z\b\4\7\x\z\z\k\n\f\b\w\m\8\q\p\r\u\s\d\v\8\y\i\y\p\v\5\w\q\q\x\3\u\q\x\r\o\m\b\d\d\1\w\3\3\c\b\v\d\g\9\a\e\7\f\e\0\q\k\8\u\s\l\9\6\w\b\t\v\u\k\u\9\4\0\u\g\u\o\5\h\c\9\7\d\2\7\8\q\v\z\6\e\x\w\g\i\u\0\9\a\j\o\3\o\i\k\3\g\b\4\j\m\9\u\5\r\3\c\8\a\0\b\n\k\0\v\2\b\x\w\6\3\8\w\e\y\z\h\t\y\v\m\9\7\2\8\f\b\3\0\t\5\g\b\g\v\c\c\c\q\w\l\s\b\2\h\m\a\x\v\4\f\h\o\o\5\y\s\u\0\t\q\q\a\s\8\1\4\p\y\h\o\5\v\j\o\n\c\z\x\w\0\x\z\t\2\p\m\6\b\w\n\j\f\w\z\1\y\g\p\q\l\i\3\b\z\u\k\3\n\c\j\x\9\e\x\y\8\k\i\2\4\p\r\0\u\8\e\9\s\u\f\8\5\t\o\j\1\c\1\d\t\q\4\5\r\o\g\o\p\s\h\5\1\l\h\z\t\l\5\u\5\s\5\l\e\7\h\d\5\q\t\z\s\0\8\g\g\7\r\1\8\t\g\f\b\1\u\e\e\t\k\h\7\b\4\a\1\l\l\v\r\m\l\l\8\2\9\b ]] 00:06:34.414 20:42:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.414 20:42:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:34.414 [2024-07-15 20:42:56.176780] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:34.414 [2024-07-15 20:42:56.176841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63452 ] 00:06:34.414 [2024-07-15 20:42:56.317476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.673 [2024-07-15 20:42:56.403560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.673 [2024-07-15 20:42:56.446434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.931  Copying: 512/512 [B] (average 500 kBps) 00:06:34.931 00:06:34.931 20:42:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jbk5xtsuknw559vqb6dc5wtl6w5fy4mk9pyod96l94ki6wehf68dmd9ey6p653wuomibiq3jjdm4cjfh7rm4xbmp50tcofwb94dklhvbtifiyxngq5n8szn03ycxpqfp1j90zwkqjlaceb2gxe248nh49nghxavsiztd19uv2pk9x1dlmwl48zb47xzzknfbwm8qprusdv8yiypv5wqqx3uqxrombdd1w33cbvdg9ae7fe0qk8usl96wbtvuku940uguo5hc97d278qvz6exwgiu09ajo3oik3gb4jm9u5r3c8a0bnk0v2bxw638weyzhtyvm9728fb30t5gbgvcccqwlsb2hmaxv4fhoo5ysu0tqqas814pyho5vjonczxw0xzt2pm6bwnjfwz1ygpqli3bzuk3ncjx9exy8ki24pr0u8e9suf85toj1c1dtq45rogopsh51lhztl5u5s5le7hd5qtzs08gg7r18tgfb1ueetkh7b4a1llvrmll829b == \j\b\k\5\x\t\s\u\k\n\w\5\5\9\v\q\b\6\d\c\5\w\t\l\6\w\5\f\y\4\m\k\9\p\y\o\d\9\6\l\9\4\k\i\6\w\e\h\f\6\8\d\m\d\9\e\y\6\p\6\5\3\w\u\o\m\i\b\i\q\3\j\j\d\m\4\c\j\f\h\7\r\m\4\x\b\m\p\5\0\t\c\o\f\w\b\9\4\d\k\l\h\v\b\t\i\f\i\y\x\n\g\q\5\n\8\s\z\n\0\3\y\c\x\p\q\f\p\1\j\9\0\z\w\k\q\j\l\a\c\e\b\2\g\x\e\2\4\8\n\h\4\9\n\g\h\x\a\v\s\i\z\t\d\1\9\u\v\2\p\k\9\x\1\d\l\m\w\l\4\8\z\b\4\7\x\z\z\k\n\f\b\w\m\8\q\p\r\u\s\d\v\8\y\i\y\p\v\5\w\q\q\x\3\u\q\x\r\o\m\b\d\d\1\w\3\3\c\b\v\d\g\9\a\e\7\f\e\0\q\k\8\u\s\l\9\6\w\b\t\v\u\k\u\9\4\0\u\g\u\o\5\h\c\9\7\d\2\7\8\q\v\z\6\e\x\w\g\i\u\0\9\a\j\o\3\o\i\k\3\g\b\4\j\m\9\u\5\r\3\c\8\a\0\b\n\k\0\v\2\b\x\w\6\3\8\w\e\y\z\h\t\y\v\m\9\7\2\8\f\b\3\0\t\5\g\b\g\v\c\c\c\q\w\l\s\b\2\h\m\a\x\v\4\f\h\o\o\5\y\s\u\0\t\q\q\a\s\8\1\4\p\y\h\o\5\v\j\o\n\c\z\x\w\0\x\z\t\2\p\m\6\b\w\n\j\f\w\z\1\y\g\p\q\l\i\3\b\z\u\k\3\n\c\j\x\9\e\x\y\8\k\i\2\4\p\r\0\u\8\e\9\s\u\f\8\5\t\o\j\1\c\1\d\t\q\4\5\r\o\g\o\p\s\h\5\1\l\h\z\t\l\5\u\5\s\5\l\e\7\h\d\5\q\t\z\s\0\8\g\g\7\r\1\8\t\g\f\b\1\u\e\e\t\k\h\7\b\4\a\1\l\l\v\r\m\l\l\8\2\9\b ]] 00:06:34.932 20:42:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.932 20:42:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:34.932 [2024-07-15 20:42:56.695973] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:34.932 [2024-07-15 20:42:56.696044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63454 ] 00:06:34.932 [2024-07-15 20:42:56.834988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.190 [2024-07-15 20:42:56.924765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.190 [2024-07-15 20:42:56.965829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.449  Copying: 512/512 [B] (average 250 kBps) 00:06:35.449 00:06:35.449 20:42:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jbk5xtsuknw559vqb6dc5wtl6w5fy4mk9pyod96l94ki6wehf68dmd9ey6p653wuomibiq3jjdm4cjfh7rm4xbmp50tcofwb94dklhvbtifiyxngq5n8szn03ycxpqfp1j90zwkqjlaceb2gxe248nh49nghxavsiztd19uv2pk9x1dlmwl48zb47xzzknfbwm8qprusdv8yiypv5wqqx3uqxrombdd1w33cbvdg9ae7fe0qk8usl96wbtvuku940uguo5hc97d278qvz6exwgiu09ajo3oik3gb4jm9u5r3c8a0bnk0v2bxw638weyzhtyvm9728fb30t5gbgvcccqwlsb2hmaxv4fhoo5ysu0tqqas814pyho5vjonczxw0xzt2pm6bwnjfwz1ygpqli3bzuk3ncjx9exy8ki24pr0u8e9suf85toj1c1dtq45rogopsh51lhztl5u5s5le7hd5qtzs08gg7r18tgfb1ueetkh7b4a1llvrmll829b == \j\b\k\5\x\t\s\u\k\n\w\5\5\9\v\q\b\6\d\c\5\w\t\l\6\w\5\f\y\4\m\k\9\p\y\o\d\9\6\l\9\4\k\i\6\w\e\h\f\6\8\d\m\d\9\e\y\6\p\6\5\3\w\u\o\m\i\b\i\q\3\j\j\d\m\4\c\j\f\h\7\r\m\4\x\b\m\p\5\0\t\c\o\f\w\b\9\4\d\k\l\h\v\b\t\i\f\i\y\x\n\g\q\5\n\8\s\z\n\0\3\y\c\x\p\q\f\p\1\j\9\0\z\w\k\q\j\l\a\c\e\b\2\g\x\e\2\4\8\n\h\4\9\n\g\h\x\a\v\s\i\z\t\d\1\9\u\v\2\p\k\9\x\1\d\l\m\w\l\4\8\z\b\4\7\x\z\z\k\n\f\b\w\m\8\q\p\r\u\s\d\v\8\y\i\y\p\v\5\w\q\q\x\3\u\q\x\r\o\m\b\d\d\1\w\3\3\c\b\v\d\g\9\a\e\7\f\e\0\q\k\8\u\s\l\9\6\w\b\t\v\u\k\u\9\4\0\u\g\u\o\5\h\c\9\7\d\2\7\8\q\v\z\6\e\x\w\g\i\u\0\9\a\j\o\3\o\i\k\3\g\b\4\j\m\9\u\5\r\3\c\8\a\0\b\n\k\0\v\2\b\x\w\6\3\8\w\e\y\z\h\t\y\v\m\9\7\2\8\f\b\3\0\t\5\g\b\g\v\c\c\c\q\w\l\s\b\2\h\m\a\x\v\4\f\h\o\o\5\y\s\u\0\t\q\q\a\s\8\1\4\p\y\h\o\5\v\j\o\n\c\z\x\w\0\x\z\t\2\p\m\6\b\w\n\j\f\w\z\1\y\g\p\q\l\i\3\b\z\u\k\3\n\c\j\x\9\e\x\y\8\k\i\2\4\p\r\0\u\8\e\9\s\u\f\8\5\t\o\j\1\c\1\d\t\q\4\5\r\o\g\o\p\s\h\5\1\l\h\z\t\l\5\u\5\s\5\l\e\7\h\d\5\q\t\z\s\0\8\g\g\7\r\1\8\t\g\f\b\1\u\e\e\t\k\h\7\b\4\a\1\l\l\v\r\m\l\l\8\2\9\b ]] 00:06:35.449 20:42:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.449 20:42:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:35.449 [2024-07-15 20:42:57.232129] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:35.449 [2024-07-15 20:42:57.232402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63467 ] 00:06:35.708 [2024-07-15 20:42:57.382104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.708 [2024-07-15 20:42:57.474142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.708 [2024-07-15 20:42:57.515191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.967  Copying: 512/512 [B] (average 500 kBps) 00:06:35.967 00:06:35.967 ************************************ 00:06:35.967 END TEST dd_flags_misc_forced_aio 00:06:35.967 ************************************ 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jbk5xtsuknw559vqb6dc5wtl6w5fy4mk9pyod96l94ki6wehf68dmd9ey6p653wuomibiq3jjdm4cjfh7rm4xbmp50tcofwb94dklhvbtifiyxngq5n8szn03ycxpqfp1j90zwkqjlaceb2gxe248nh49nghxavsiztd19uv2pk9x1dlmwl48zb47xzzknfbwm8qprusdv8yiypv5wqqx3uqxrombdd1w33cbvdg9ae7fe0qk8usl96wbtvuku940uguo5hc97d278qvz6exwgiu09ajo3oik3gb4jm9u5r3c8a0bnk0v2bxw638weyzhtyvm9728fb30t5gbgvcccqwlsb2hmaxv4fhoo5ysu0tqqas814pyho5vjonczxw0xzt2pm6bwnjfwz1ygpqli3bzuk3ncjx9exy8ki24pr0u8e9suf85toj1c1dtq45rogopsh51lhztl5u5s5le7hd5qtzs08gg7r18tgfb1ueetkh7b4a1llvrmll829b == \j\b\k\5\x\t\s\u\k\n\w\5\5\9\v\q\b\6\d\c\5\w\t\l\6\w\5\f\y\4\m\k\9\p\y\o\d\9\6\l\9\4\k\i\6\w\e\h\f\6\8\d\m\d\9\e\y\6\p\6\5\3\w\u\o\m\i\b\i\q\3\j\j\d\m\4\c\j\f\h\7\r\m\4\x\b\m\p\5\0\t\c\o\f\w\b\9\4\d\k\l\h\v\b\t\i\f\i\y\x\n\g\q\5\n\8\s\z\n\0\3\y\c\x\p\q\f\p\1\j\9\0\z\w\k\q\j\l\a\c\e\b\2\g\x\e\2\4\8\n\h\4\9\n\g\h\x\a\v\s\i\z\t\d\1\9\u\v\2\p\k\9\x\1\d\l\m\w\l\4\8\z\b\4\7\x\z\z\k\n\f\b\w\m\8\q\p\r\u\s\d\v\8\y\i\y\p\v\5\w\q\q\x\3\u\q\x\r\o\m\b\d\d\1\w\3\3\c\b\v\d\g\9\a\e\7\f\e\0\q\k\8\u\s\l\9\6\w\b\t\v\u\k\u\9\4\0\u\g\u\o\5\h\c\9\7\d\2\7\8\q\v\z\6\e\x\w\g\i\u\0\9\a\j\o\3\o\i\k\3\g\b\4\j\m\9\u\5\r\3\c\8\a\0\b\n\k\0\v\2\b\x\w\6\3\8\w\e\y\z\h\t\y\v\m\9\7\2\8\f\b\3\0\t\5\g\b\g\v\c\c\c\q\w\l\s\b\2\h\m\a\x\v\4\f\h\o\o\5\y\s\u\0\t\q\q\a\s\8\1\4\p\y\h\o\5\v\j\o\n\c\z\x\w\0\x\z\t\2\p\m\6\b\w\n\j\f\w\z\1\y\g\p\q\l\i\3\b\z\u\k\3\n\c\j\x\9\e\x\y\8\k\i\2\4\p\r\0\u\8\e\9\s\u\f\8\5\t\o\j\1\c\1\d\t\q\4\5\r\o\g\o\p\s\h\5\1\l\h\z\t\l\5\u\5\s\5\l\e\7\h\d\5\q\t\z\s\0\8\g\g\7\r\1\8\t\g\f\b\1\u\e\e\t\k\h\7\b\4\a\1\l\l\v\r\m\l\l\8\2\9\b ]] 00:06:35.967 00:06:35.967 real 0m4.325s 00:06:35.967 user 0m2.371s 00:06:35.967 sys 0m0.968s 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:35.967 ************************************ 00:06:35.967 END TEST spdk_dd_posix 00:06:35.967 ************************************ 00:06:35.967 00:06:35.967 real 0m19.736s 00:06:35.967 user 0m9.590s 00:06:35.967 
sys 0m5.777s 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.967 20:42:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 20:42:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:35.967 20:42:57 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:35.967 20:42:57 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.967 20:42:57 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.967 20:42:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:35.967 ************************************ 00:06:35.967 START TEST spdk_dd_malloc 00:06:35.967 ************************************ 00:06:35.967 20:42:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:36.225 * Looking for test storage... 00:06:36.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:36.225 ************************************ 00:06:36.225 START TEST dd_malloc_copy 00:06:36.225 ************************************ 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:36.225 20:42:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.225 [2024-07-15 20:42:58.022439] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:36.225 [2024-07-15 20:42:58.022689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63536 ] 00:06:36.225 { 00:06:36.225 "subsystems": [ 00:06:36.225 { 00:06:36.225 "subsystem": "bdev", 00:06:36.225 "config": [ 00:06:36.225 { 00:06:36.225 "params": { 00:06:36.225 "block_size": 512, 00:06:36.225 "num_blocks": 1048576, 00:06:36.225 "name": "malloc0" 00:06:36.225 }, 00:06:36.225 "method": "bdev_malloc_create" 00:06:36.225 }, 00:06:36.225 { 00:06:36.225 "params": { 00:06:36.225 "block_size": 512, 00:06:36.225 "num_blocks": 1048576, 00:06:36.225 "name": "malloc1" 00:06:36.225 }, 00:06:36.225 "method": "bdev_malloc_create" 00:06:36.225 }, 00:06:36.225 { 00:06:36.225 "method": "bdev_wait_for_examine" 00:06:36.225 } 00:06:36.225 ] 00:06:36.225 } 00:06:36.225 ] 00:06:36.225 } 00:06:36.482 [2024-07-15 20:42:58.169018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.482 [2024-07-15 20:42:58.258634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.482 [2024-07-15 20:42:58.300392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.371  Copying: 259/512 [MB] (259 MBps) Copying: 512/512 [MB] (average 259 MBps) 00:06:39.371 00:06:39.371 20:43:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:39.371 20:43:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:39.371 20:43:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:39.371 20:43:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.371 [2024-07-15 20:43:01.061896] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:39.371 [2024-07-15 20:43:01.062108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63578 ] 00:06:39.371 { 00:06:39.371 "subsystems": [ 00:06:39.371 { 00:06:39.371 "subsystem": "bdev", 00:06:39.371 "config": [ 00:06:39.371 { 00:06:39.371 "params": { 00:06:39.371 "block_size": 512, 00:06:39.371 "num_blocks": 1048576, 00:06:39.371 "name": "malloc0" 00:06:39.371 }, 00:06:39.371 "method": "bdev_malloc_create" 00:06:39.371 }, 00:06:39.371 { 00:06:39.371 "params": { 00:06:39.371 "block_size": 512, 00:06:39.371 "num_blocks": 1048576, 00:06:39.371 "name": "malloc1" 00:06:39.371 }, 00:06:39.371 "method": "bdev_malloc_create" 00:06:39.371 }, 00:06:39.371 { 00:06:39.371 "method": "bdev_wait_for_examine" 00:06:39.371 } 00:06:39.371 ] 00:06:39.371 } 00:06:39.371 ] 00:06:39.371 } 00:06:39.371 [2024-07-15 20:43:01.203599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.629 [2024-07-15 20:43:01.293774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.629 [2024-07-15 20:43:01.335591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.198  Copying: 259/512 [MB] (259 MBps) Copying: 512/512 [MB] (average 260 MBps) 00:06:42.199 00:06:42.199 00:06:42.199 real 0m6.073s 00:06:42.199 user 0m5.254s 00:06:42.199 sys 0m0.674s 00:06:42.199 20:43:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.199 20:43:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.199 ************************************ 00:06:42.199 END TEST dd_malloc_copy 00:06:42.199 ************************************ 00:06:42.199 20:43:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:06:42.199 00:06:42.199 real 0m6.247s 00:06:42.199 user 0m5.313s 00:06:42.199 sys 0m0.792s 00:06:42.199 ************************************ 00:06:42.199 END TEST spdk_dd_malloc 00:06:42.199 ************************************ 00:06:42.199 20:43:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.199 20:43:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:42.458 20:43:04 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:42.458 20:43:04 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:42.458 20:43:04 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.458 20:43:04 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.458 20:43:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:42.458 ************************************ 00:06:42.458 START TEST spdk_dd_bdev_to_bdev 00:06:42.458 ************************************ 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:42.458 * Looking for test storage... 
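The two dd_malloc_copy passes above drive spdk_dd purely against RAM-backed bdevs: each run is handed a JSON bdev config over /dev/fd/62 that creates two 512-byte-block, 1048576-block malloc bdevs (512 MiB each) and copies one into the other, first malloc0 into malloc1 and then back again, at roughly 260 MBps. A minimal standalone sketch of the same copy, assuming a built SPDK tree and the config saved to an ordinary file (malloc_copy.json is an assumed name) instead of the process substitution the test uses:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}

./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc_copy.json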
00:06:42.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:42.458 
20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:42.458 ************************************ 00:06:42.458 START TEST dd_inflate_file 00:06:42.458 ************************************ 00:06:42.458 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:42.717 [2024-07-15 20:43:04.375633] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:42.717 [2024-07-15 20:43:04.375711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63677 ] 00:06:42.717 [2024-07-15 20:43:04.517624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.717 [2024-07-15 20:43:04.617130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.976 [2024-07-15 20:43:04.658183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.235  Copying: 64/64 [MB] (average 1488 MBps) 00:06:43.235 00:06:43.235 00:06:43.235 real 0m0.578s 00:06:43.235 user 0m0.352s 00:06:43.235 sys 0m0.266s 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:43.235 ************************************ 00:06:43.235 END TEST dd_inflate_file 00:06:43.235 ************************************ 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:43.235 ************************************ 00:06:43.235 START TEST dd_copy_to_out_bdev 00:06:43.235 ************************************ 00:06:43.235 20:43:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:43.235 { 00:06:43.235 "subsystems": [ 00:06:43.235 { 00:06:43.235 "subsystem": "bdev", 00:06:43.235 "config": [ 00:06:43.235 { 00:06:43.235 "params": { 00:06:43.235 "trtype": "pcie", 00:06:43.235 "traddr": "0000:00:10.0", 00:06:43.235 "name": "Nvme0" 00:06:43.235 }, 00:06:43.235 "method": "bdev_nvme_attach_controller" 00:06:43.235 }, 00:06:43.235 { 00:06:43.235 "params": { 00:06:43.235 "trtype": "pcie", 00:06:43.235 "traddr": "0000:00:11.0", 00:06:43.235 "name": "Nvme1" 00:06:43.235 }, 00:06:43.235 "method": "bdev_nvme_attach_controller" 00:06:43.235 }, 00:06:43.235 { 00:06:43.235 "method": "bdev_wait_for_examine" 00:06:43.235 } 00:06:43.235 ] 00:06:43.235 } 00:06:43.235 ] 00:06:43.235 } 00:06:43.235 [2024-07-15 20:43:05.036943] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
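The test_file0_size=67108891 figure checked by wc -c above is exactly the magic line written at bdev_to_bdev.sh@93 (the echo is presumably redirected into dd.dump0, which the size check confirms: 'This Is Our Magic, find it' is 26 characters plus a newline) followed by the 64 MiB that dd_inflate_file appended from /dev/zero:

  27 + 64 * 1048576 = 27 + 67108864 = 67108891 bytes

The --oflag=append on the inflate step is what keeps that magic string intact at the front of dd.dump0 for the offset checks that follow.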
00:06:43.235 [2024-07-15 20:43:05.037022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:06:43.493 [2024-07-15 20:43:05.178383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.493 [2024-07-15 20:43:05.266353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.493 [2024-07-15 20:43:05.307884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.067  Copying: 49/64 [MB] (49 MBps) Copying: 64/64 [MB] (average 50 MBps) 00:06:45.067 00:06:45.325 00:06:45.325 real 0m1.992s 00:06:45.325 user 0m1.775s 00:06:45.325 sys 0m1.607s 00:06:45.325 ************************************ 00:06:45.325 END TEST dd_copy_to_out_bdev 00:06:45.325 ************************************ 00:06:45.325 20:43:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.325 20:43:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.325 ************************************ 00:06:45.325 START TEST dd_offset_magic 00:06:45.325 ************************************ 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:45.325 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:45.325 [2024-07-15 20:43:07.101534] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:45.325 [2024-07-15 20:43:07.101603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63755 ] 00:06:45.325 { 00:06:45.325 "subsystems": [ 00:06:45.325 { 00:06:45.325 "subsystem": "bdev", 00:06:45.325 "config": [ 00:06:45.325 { 00:06:45.325 "params": { 00:06:45.325 "trtype": "pcie", 00:06:45.325 "traddr": "0000:00:10.0", 00:06:45.325 "name": "Nvme0" 00:06:45.325 }, 00:06:45.325 "method": "bdev_nvme_attach_controller" 00:06:45.325 }, 00:06:45.325 { 00:06:45.325 "params": { 00:06:45.325 "trtype": "pcie", 00:06:45.325 "traddr": "0000:00:11.0", 00:06:45.325 "name": "Nvme1" 00:06:45.325 }, 00:06:45.325 "method": "bdev_nvme_attach_controller" 00:06:45.325 }, 00:06:45.325 { 00:06:45.325 "method": "bdev_wait_for_examine" 00:06:45.325 } 00:06:45.325 ] 00:06:45.325 } 00:06:45.325 ] 00:06:45.325 } 00:06:45.583 [2024-07-15 20:43:07.241662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.583 [2024-07-15 20:43:07.331266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.583 [2024-07-15 20:43:07.372889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.100  Copying: 65/65 [MB] (average 698 MBps) 00:06:46.100 00:06:46.100 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:46.100 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:46.100 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:46.100 20:43:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:46.100 [2024-07-15 20:43:07.904108] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:46.100 [2024-07-15 20:43:07.904181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63775 ] 00:06:46.100 { 00:06:46.100 "subsystems": [ 00:06:46.100 { 00:06:46.100 "subsystem": "bdev", 00:06:46.100 "config": [ 00:06:46.100 { 00:06:46.100 "params": { 00:06:46.100 "trtype": "pcie", 00:06:46.100 "traddr": "0000:00:10.0", 00:06:46.100 "name": "Nvme0" 00:06:46.100 }, 00:06:46.100 "method": "bdev_nvme_attach_controller" 00:06:46.100 }, 00:06:46.100 { 00:06:46.100 "params": { 00:06:46.100 "trtype": "pcie", 00:06:46.100 "traddr": "0000:00:11.0", 00:06:46.100 "name": "Nvme1" 00:06:46.100 }, 00:06:46.100 "method": "bdev_nvme_attach_controller" 00:06:46.100 }, 00:06:46.100 { 00:06:46.100 "method": "bdev_wait_for_examine" 00:06:46.100 } 00:06:46.100 ] 00:06:46.100 } 00:06:46.100 ] 00:06:46.100 } 00:06:46.357 [2024-07-15 20:43:08.044716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.357 [2024-07-15 20:43:08.134970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.357 [2024-07-15 20:43:08.176384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.615  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:46.615 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:46.615 20:43:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:46.875 [2024-07-15 20:43:08.570191] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
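Each dd_offset_magic iteration is a seek/skip round trip between the two namespaces: 65 MiB are copied from Nvme0n1 into Nvme1n1 starting at the given block offset, then a single 1 MiB block is read back from that same offset into dd.dump1 and its first 26 bytes are compared against the magic string. A rough sketch of the offset-16 pair just completed, assuming nvme.json is a file holding the two bdev_nvme_attach_controller entries shown in the config dumps (trtype pcie, traddr 0000:00:10.0 and 0000:00:11.0):

./build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json nvme.json
./build/bin/spdk_dd --ib=Nvme1n1 --of=test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json nvme.json
read -rn26 magic_check < test/dd/dd.dump1    # redirect assumed; the trace only shows the read itself

The same pair is now repeating with --seek=64/--skip=64, which is why the test dispatches four spdk_dd runs in total.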
00:06:46.875 [2024-07-15 20:43:08.570257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63792 ] 00:06:46.875 { 00:06:46.875 "subsystems": [ 00:06:46.875 { 00:06:46.875 "subsystem": "bdev", 00:06:46.875 "config": [ 00:06:46.875 { 00:06:46.875 "params": { 00:06:46.875 "trtype": "pcie", 00:06:46.875 "traddr": "0000:00:10.0", 00:06:46.875 "name": "Nvme0" 00:06:46.875 }, 00:06:46.875 "method": "bdev_nvme_attach_controller" 00:06:46.875 }, 00:06:46.875 { 00:06:46.875 "params": { 00:06:46.875 "trtype": "pcie", 00:06:46.875 "traddr": "0000:00:11.0", 00:06:46.875 "name": "Nvme1" 00:06:46.875 }, 00:06:46.875 "method": "bdev_nvme_attach_controller" 00:06:46.875 }, 00:06:46.875 { 00:06:46.875 "method": "bdev_wait_for_examine" 00:06:46.875 } 00:06:46.875 ] 00:06:46.875 } 00:06:46.875 ] 00:06:46.875 } 00:06:46.875 [2024-07-15 20:43:08.711080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.172 [2024-07-15 20:43:08.803501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.172 [2024-07-15 20:43:08.844935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.431  Copying: 65/65 [MB] (average 783 MBps) 00:06:47.431 00:06:47.431 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:47.431 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:47.431 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:47.431 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:47.690 [2024-07-15 20:43:09.373822] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:47.690 [2024-07-15 20:43:09.373886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63814 ] 00:06:47.690 { 00:06:47.690 "subsystems": [ 00:06:47.690 { 00:06:47.690 "subsystem": "bdev", 00:06:47.690 "config": [ 00:06:47.690 { 00:06:47.690 "params": { 00:06:47.690 "trtype": "pcie", 00:06:47.690 "traddr": "0000:00:10.0", 00:06:47.690 "name": "Nvme0" 00:06:47.690 }, 00:06:47.690 "method": "bdev_nvme_attach_controller" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "params": { 00:06:47.690 "trtype": "pcie", 00:06:47.690 "traddr": "0000:00:11.0", 00:06:47.690 "name": "Nvme1" 00:06:47.690 }, 00:06:47.690 "method": "bdev_nvme_attach_controller" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "method": "bdev_wait_for_examine" 00:06:47.690 } 00:06:47.690 ] 00:06:47.690 } 00:06:47.690 ] 00:06:47.690 } 00:06:47.690 [2024-07-15 20:43:09.515460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.949 [2024-07-15 20:43:09.599803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.949 [2024-07-15 20:43:09.641215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.207  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:48.207 00:06:48.207 ************************************ 00:06:48.207 END TEST dd_offset_magic 00:06:48.207 ************************************ 00:06:48.207 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:48.207 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:48.207 00:06:48.207 real 0m2.941s 00:06:48.207 user 0m2.155s 00:06:48.207 sys 0m0.823s 00:06:48.207 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.207 20:43:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:48.207 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.207 [2024-07-15 20:43:10.105346] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
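The clear_nvme step in the cleanup zero-fills the region the test touched: with size=4194330 (which happens to be 4 MiB plus the 26-byte magic) and bs=1048576, the block count is rounded up, so count=5 and 5 MiB of zeroes from /dev/zero are written over the start of Nvme0n1:

  count = ceil(4194330 / 1048576) = 5

The same wipe is then repeated for Nvme1n1.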
00:06:48.207 [2024-07-15 20:43:10.105422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63845 ] 00:06:48.207 { 00:06:48.207 "subsystems": [ 00:06:48.207 { 00:06:48.207 "subsystem": "bdev", 00:06:48.207 "config": [ 00:06:48.207 { 00:06:48.207 "params": { 00:06:48.207 "trtype": "pcie", 00:06:48.207 "traddr": "0000:00:10.0", 00:06:48.207 "name": "Nvme0" 00:06:48.207 }, 00:06:48.207 "method": "bdev_nvme_attach_controller" 00:06:48.207 }, 00:06:48.207 { 00:06:48.207 "params": { 00:06:48.207 "trtype": "pcie", 00:06:48.207 "traddr": "0000:00:11.0", 00:06:48.207 "name": "Nvme1" 00:06:48.207 }, 00:06:48.207 "method": "bdev_nvme_attach_controller" 00:06:48.207 }, 00:06:48.207 { 00:06:48.207 "method": "bdev_wait_for_examine" 00:06:48.207 } 00:06:48.207 ] 00:06:48.207 } 00:06:48.207 ] 00:06:48.207 } 00:06:48.466 [2024-07-15 20:43:10.246994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.466 [2024-07-15 20:43:10.345597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.724 [2024-07-15 20:43:10.387355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.983  Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:48.983 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:48.983 20:43:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:48.983 [2024-07-15 20:43:10.789095] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:48.983 [2024-07-15 20:43:10.789159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63861 ] 00:06:48.983 { 00:06:48.983 "subsystems": [ 00:06:48.983 { 00:06:48.983 "subsystem": "bdev", 00:06:48.983 "config": [ 00:06:48.983 { 00:06:48.983 "params": { 00:06:48.983 "trtype": "pcie", 00:06:48.983 "traddr": "0000:00:10.0", 00:06:48.983 "name": "Nvme0" 00:06:48.983 }, 00:06:48.983 "method": "bdev_nvme_attach_controller" 00:06:48.983 }, 00:06:48.983 { 00:06:48.983 "params": { 00:06:48.983 "trtype": "pcie", 00:06:48.983 "traddr": "0000:00:11.0", 00:06:48.983 "name": "Nvme1" 00:06:48.983 }, 00:06:48.983 "method": "bdev_nvme_attach_controller" 00:06:48.983 }, 00:06:48.983 { 00:06:48.983 "method": "bdev_wait_for_examine" 00:06:48.983 } 00:06:48.983 ] 00:06:48.983 } 00:06:48.983 ] 00:06:48.983 } 00:06:49.241 [2024-07-15 20:43:10.929735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.241 [2024-07-15 20:43:11.029832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.241 [2024-07-15 20:43:11.071514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.759  Copying: 5120/5120 [kB] (average 714 MBps) 00:06:49.759 00:06:49.759 20:43:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:49.759 ************************************ 00:06:49.759 END TEST spdk_dd_bdev_to_bdev 00:06:49.759 ************************************ 00:06:49.759 00:06:49.759 real 0m7.274s 00:06:49.759 user 0m5.402s 00:06:49.759 sys 0m3.423s 00:06:49.759 20:43:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.759 20:43:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:49.759 20:43:11 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:49.759 20:43:11 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:49.759 20:43:11 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:49.759 20:43:11 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.759 20:43:11 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.759 20:43:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.759 ************************************ 00:06:49.759 START TEST spdk_dd_uring 00:06:49.759 ************************************ 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:49.759 * Looking for test storage... 
00:06:49.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.759 20:43:11 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:50.017 ************************************ 00:06:50.017 START TEST dd_uring_copy 00:06:50.017 ************************************ 00:06:50.017 
20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:50.017 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.018 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=yo5yrt7dkzit11wofrrsmpb9lz5vpjhs17zyzd4u7ic225zcgkab4ub8pa18pj381ssfmpebnbjm1egb2o5r7ght6pc681rar2ixtpuiq3plbpfgmq80d781zzcfdi71gaug9e8bpw5lvr0ezz125t0k55t3fm54lwd5h769e7whq40nhwoodfmwziqhwyttoktuw4uko2erxm2g2sayl8e179jej4utms8s0u6oh9n3fd4ab4jshtax1v530bgydcphwz1r8nslbb7i7jlq2vqmzqg8jhkllm9p8awqt9yubfnu4j5eud735pwc9d9byjojtuz72i4p3l8rukw33dn8xv3btjn7ftj5s76l2ib32uwre07y8u1nbmzxo30onz5izy7gu4bl2h3zy3vtfrp24lsghqg6rhmccpqqx8lbyp3j3z1yheg767izm1bjhxlbby8n7d599kjqlj3z4l5goliyy79f2s905qaxoen52wrsvohpsqq53pfyp3var8ysg5i3q0t9rwj8oahxnci8hmx2f56yryzxv9o7d0jdt9ja1szoic9sr9g1pjrdtucu1noueusii2kphbbsoeb7lkttengomwg94vjllfg0zleqjnpzmvzenfwqi3d2tlk4qoxffph1y5d7m3ap6sip9ro2zas69tma202ib290dydjsxkn2tosmdyoywbkuc33fe42hzaqorsk6ik8kv5uw5wywy4yh4zl11bwkk8mdoz0px87n16ggpeebnypori2kr0g61ttnggc33tbtkgfertk9vjib1rnxf29aub53tqssutk47fi0wwy53gp7qdwpk0f5672raa0id5fvzj6s010qs68hr3xaflrg3txsfdgrba2lfa9g1olh6qvplmi24ou1bghxio8fofqp4iwyp0tw2ipd16y9qajdlq7wqo5zzqj1ml23pjlyj6tglsn3zrnob1wk48oyjv8g5g1l6mvji87n8a1dm91ccjo3jigupiekheqpu4d54is 00:06:50.018 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo yo5yrt7dkzit11wofrrsmpb9lz5vpjhs17zyzd4u7ic225zcgkab4ub8pa18pj381ssfmpebnbjm1egb2o5r7ght6pc681rar2ixtpuiq3plbpfgmq80d781zzcfdi71gaug9e8bpw5lvr0ezz125t0k55t3fm54lwd5h769e7whq40nhwoodfmwziqhwyttoktuw4uko2erxm2g2sayl8e179jej4utms8s0u6oh9n3fd4ab4jshtax1v530bgydcphwz1r8nslbb7i7jlq2vqmzqg8jhkllm9p8awqt9yubfnu4j5eud735pwc9d9byjojtuz72i4p3l8rukw33dn8xv3btjn7ftj5s76l2ib32uwre07y8u1nbmzxo30onz5izy7gu4bl2h3zy3vtfrp24lsghqg6rhmccpqqx8lbyp3j3z1yheg767izm1bjhxlbby8n7d599kjqlj3z4l5goliyy79f2s905qaxoen52wrsvohpsqq53pfyp3var8ysg5i3q0t9rwj8oahxnci8hmx2f56yryzxv9o7d0jdt9ja1szoic9sr9g1pjrdtucu1noueusii2kphbbsoeb7lkttengomwg94vjllfg0zleqjnpzmvzenfwqi3d2tlk4qoxffph1y5d7m3ap6sip9ro2zas69tma202ib290dydjsxkn2tosmdyoywbkuc33fe42hzaqorsk6ik8kv5uw5wywy4yh4zl11bwkk8mdoz0px87n16ggpeebnypori2kr0g61ttnggc33tbtkgfertk9vjib1rnxf29aub53tqssutk47fi0wwy53gp7qdwpk0f5672raa0id5fvzj6s010qs68hr3xaflrg3txsfdgrba2lfa9g1olh6qvplmi24ou1bghxio8fofqp4iwyp0tw2ipd16y9qajdlq7wqo5zzqj1ml23pjlyj6tglsn3zrnob1wk48oyjv8g5g1l6mvji87n8a1dm91ccjo3jigupiekheqpu4d54is 00:06:50.018 20:43:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:50.018 [2024-07-15 20:43:11.772770] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:06:50.018 [2024-07-15 20:43:11.772843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63931 ] 00:06:50.018 [2024-07-15 20:43:11.913764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.276 [2024-07-15 20:43:12.000751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.276 [2024-07-15 20:43:12.041921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.154  Copying: 511/511 [MB] (average 1414 MBps) 00:06:51.154 00:06:51.154 20:43:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:51.154 20:43:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:51.154 20:43:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:51.154 20:43:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.154 [2024-07-15 20:43:12.958902] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:51.154 [2024-07-15 20:43:12.959086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:06:51.154 { 00:06:51.154 "subsystems": [ 00:06:51.154 { 00:06:51.154 "subsystem": "bdev", 00:06:51.154 "config": [ 00:06:51.154 { 00:06:51.154 "params": { 00:06:51.154 "block_size": 512, 00:06:51.154 "num_blocks": 1048576, 00:06:51.154 "name": "malloc0" 00:06:51.154 }, 00:06:51.154 "method": "bdev_malloc_create" 00:06:51.154 }, 00:06:51.154 { 00:06:51.154 "params": { 00:06:51.154 "filename": "/dev/zram1", 00:06:51.154 "name": "uring0" 00:06:51.154 }, 00:06:51.154 "method": "bdev_uring_create" 00:06:51.154 }, 00:06:51.154 { 00:06:51.154 "method": "bdev_wait_for_examine" 00:06:51.154 } 00:06:51.154 ] 00:06:51.154 } 00:06:51.154 ] 00:06:51.154 } 00:06:51.413 [2024-07-15 20:43:13.099429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.413 [2024-07-15 20:43:13.186043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.413 [2024-07-15 20:43:13.227006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.724  Copying: 268/512 [MB] (268 MBps) Copying: 512/512 [MB] (average 268 MBps) 00:06:53.724 00:06:53.724 20:43:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:53.724 20:43:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:53.724 20:43:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.724 20:43:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.983 { 00:06:53.983 "subsystems": [ 00:06:53.983 { 00:06:53.983 "subsystem": "bdev", 00:06:53.983 "config": [ 00:06:53.983 { 00:06:53.983 "params": { 00:06:53.983 "block_size": 512, 00:06:53.983 "num_blocks": 1048576, 00:06:53.983 "name": "malloc0" 00:06:53.983 }, 00:06:53.983 "method": 
"bdev_malloc_create" 00:06:53.983 }, 00:06:53.983 { 00:06:53.983 "params": { 00:06:53.983 "filename": "/dev/zram1", 00:06:53.983 "name": "uring0" 00:06:53.983 }, 00:06:53.983 "method": "bdev_uring_create" 00:06:53.983 }, 00:06:53.983 { 00:06:53.983 "method": "bdev_wait_for_examine" 00:06:53.983 } 00:06:53.983 ] 00:06:53.983 } 00:06:53.983 ] 00:06:53.983 } 00:06:53.983 [2024-07-15 20:43:15.665315] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:06:53.983 [2024-07-15 20:43:15.665384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63991 ] 00:06:53.983 [2024-07-15 20:43:15.804981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.983 [2024-07-15 20:43:15.885028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.241 [2024-07-15 20:43:15.926298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.073  Copying: 216/512 [MB] (216 MBps) Copying: 428/512 [MB] (211 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:06:57.073 00:06:57.073 20:43:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:57.074 20:43:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ yo5yrt7dkzit11wofrrsmpb9lz5vpjhs17zyzd4u7ic225zcgkab4ub8pa18pj381ssfmpebnbjm1egb2o5r7ght6pc681rar2ixtpuiq3plbpfgmq80d781zzcfdi71gaug9e8bpw5lvr0ezz125t0k55t3fm54lwd5h769e7whq40nhwoodfmwziqhwyttoktuw4uko2erxm2g2sayl8e179jej4utms8s0u6oh9n3fd4ab4jshtax1v530bgydcphwz1r8nslbb7i7jlq2vqmzqg8jhkllm9p8awqt9yubfnu4j5eud735pwc9d9byjojtuz72i4p3l8rukw33dn8xv3btjn7ftj5s76l2ib32uwre07y8u1nbmzxo30onz5izy7gu4bl2h3zy3vtfrp24lsghqg6rhmccpqqx8lbyp3j3z1yheg767izm1bjhxlbby8n7d599kjqlj3z4l5goliyy79f2s905qaxoen52wrsvohpsqq53pfyp3var8ysg5i3q0t9rwj8oahxnci8hmx2f56yryzxv9o7d0jdt9ja1szoic9sr9g1pjrdtucu1noueusii2kphbbsoeb7lkttengomwg94vjllfg0zleqjnpzmvzenfwqi3d2tlk4qoxffph1y5d7m3ap6sip9ro2zas69tma202ib290dydjsxkn2tosmdyoywbkuc33fe42hzaqorsk6ik8kv5uw5wywy4yh4zl11bwkk8mdoz0px87n16ggpeebnypori2kr0g61ttnggc33tbtkgfertk9vjib1rnxf29aub53tqssutk47fi0wwy53gp7qdwpk0f5672raa0id5fvzj6s010qs68hr3xaflrg3txsfdgrba2lfa9g1olh6qvplmi24ou1bghxio8fofqp4iwyp0tw2ipd16y9qajdlq7wqo5zzqj1ml23pjlyj6tglsn3zrnob1wk48oyjv8g5g1l6mvji87n8a1dm91ccjo3jigupiekheqpu4d54is == 
\y\o\5\y\r\t\7\d\k\z\i\t\1\1\w\o\f\r\r\s\m\p\b\9\l\z\5\v\p\j\h\s\1\7\z\y\z\d\4\u\7\i\c\2\2\5\z\c\g\k\a\b\4\u\b\8\p\a\1\8\p\j\3\8\1\s\s\f\m\p\e\b\n\b\j\m\1\e\g\b\2\o\5\r\7\g\h\t\6\p\c\6\8\1\r\a\r\2\i\x\t\p\u\i\q\3\p\l\b\p\f\g\m\q\8\0\d\7\8\1\z\z\c\f\d\i\7\1\g\a\u\g\9\e\8\b\p\w\5\l\v\r\0\e\z\z\1\2\5\t\0\k\5\5\t\3\f\m\5\4\l\w\d\5\h\7\6\9\e\7\w\h\q\4\0\n\h\w\o\o\d\f\m\w\z\i\q\h\w\y\t\t\o\k\t\u\w\4\u\k\o\2\e\r\x\m\2\g\2\s\a\y\l\8\e\1\7\9\j\e\j\4\u\t\m\s\8\s\0\u\6\o\h\9\n\3\f\d\4\a\b\4\j\s\h\t\a\x\1\v\5\3\0\b\g\y\d\c\p\h\w\z\1\r\8\n\s\l\b\b\7\i\7\j\l\q\2\v\q\m\z\q\g\8\j\h\k\l\l\m\9\p\8\a\w\q\t\9\y\u\b\f\n\u\4\j\5\e\u\d\7\3\5\p\w\c\9\d\9\b\y\j\o\j\t\u\z\7\2\i\4\p\3\l\8\r\u\k\w\3\3\d\n\8\x\v\3\b\t\j\n\7\f\t\j\5\s\7\6\l\2\i\b\3\2\u\w\r\e\0\7\y\8\u\1\n\b\m\z\x\o\3\0\o\n\z\5\i\z\y\7\g\u\4\b\l\2\h\3\z\y\3\v\t\f\r\p\2\4\l\s\g\h\q\g\6\r\h\m\c\c\p\q\q\x\8\l\b\y\p\3\j\3\z\1\y\h\e\g\7\6\7\i\z\m\1\b\j\h\x\l\b\b\y\8\n\7\d\5\9\9\k\j\q\l\j\3\z\4\l\5\g\o\l\i\y\y\7\9\f\2\s\9\0\5\q\a\x\o\e\n\5\2\w\r\s\v\o\h\p\s\q\q\5\3\p\f\y\p\3\v\a\r\8\y\s\g\5\i\3\q\0\t\9\r\w\j\8\o\a\h\x\n\c\i\8\h\m\x\2\f\5\6\y\r\y\z\x\v\9\o\7\d\0\j\d\t\9\j\a\1\s\z\o\i\c\9\s\r\9\g\1\p\j\r\d\t\u\c\u\1\n\o\u\e\u\s\i\i\2\k\p\h\b\b\s\o\e\b\7\l\k\t\t\e\n\g\o\m\w\g\9\4\v\j\l\l\f\g\0\z\l\e\q\j\n\p\z\m\v\z\e\n\f\w\q\i\3\d\2\t\l\k\4\q\o\x\f\f\p\h\1\y\5\d\7\m\3\a\p\6\s\i\p\9\r\o\2\z\a\s\6\9\t\m\a\2\0\2\i\b\2\9\0\d\y\d\j\s\x\k\n\2\t\o\s\m\d\y\o\y\w\b\k\u\c\3\3\f\e\4\2\h\z\a\q\o\r\s\k\6\i\k\8\k\v\5\u\w\5\w\y\w\y\4\y\h\4\z\l\1\1\b\w\k\k\8\m\d\o\z\0\p\x\8\7\n\1\6\g\g\p\e\e\b\n\y\p\o\r\i\2\k\r\0\g\6\1\t\t\n\g\g\c\3\3\t\b\t\k\g\f\e\r\t\k\9\v\j\i\b\1\r\n\x\f\2\9\a\u\b\5\3\t\q\s\s\u\t\k\4\7\f\i\0\w\w\y\5\3\g\p\7\q\d\w\p\k\0\f\5\6\7\2\r\a\a\0\i\d\5\f\v\z\j\6\s\0\1\0\q\s\6\8\h\r\3\x\a\f\l\r\g\3\t\x\s\f\d\g\r\b\a\2\l\f\a\9\g\1\o\l\h\6\q\v\p\l\m\i\2\4\o\u\1\b\g\h\x\i\o\8\f\o\f\q\p\4\i\w\y\p\0\t\w\2\i\p\d\1\6\y\9\q\a\j\d\l\q\7\w\q\o\5\z\z\q\j\1\m\l\2\3\p\j\l\y\j\6\t\g\l\s\n\3\z\r\n\o\b\1\w\k\4\8\o\y\j\v\8\g\5\g\1\l\6\m\v\j\i\8\7\n\8\a\1\d\m\9\1\c\c\j\o\3\j\i\g\u\p\i\e\k\h\e\q\p\u\4\d\5\4\i\s ]] 00:06:57.074 20:43:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:57.074 20:43:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ yo5yrt7dkzit11wofrrsmpb9lz5vpjhs17zyzd4u7ic225zcgkab4ub8pa18pj381ssfmpebnbjm1egb2o5r7ght6pc681rar2ixtpuiq3plbpfgmq80d781zzcfdi71gaug9e8bpw5lvr0ezz125t0k55t3fm54lwd5h769e7whq40nhwoodfmwziqhwyttoktuw4uko2erxm2g2sayl8e179jej4utms8s0u6oh9n3fd4ab4jshtax1v530bgydcphwz1r8nslbb7i7jlq2vqmzqg8jhkllm9p8awqt9yubfnu4j5eud735pwc9d9byjojtuz72i4p3l8rukw33dn8xv3btjn7ftj5s76l2ib32uwre07y8u1nbmzxo30onz5izy7gu4bl2h3zy3vtfrp24lsghqg6rhmccpqqx8lbyp3j3z1yheg767izm1bjhxlbby8n7d599kjqlj3z4l5goliyy79f2s905qaxoen52wrsvohpsqq53pfyp3var8ysg5i3q0t9rwj8oahxnci8hmx2f56yryzxv9o7d0jdt9ja1szoic9sr9g1pjrdtucu1noueusii2kphbbsoeb7lkttengomwg94vjllfg0zleqjnpzmvzenfwqi3d2tlk4qoxffph1y5d7m3ap6sip9ro2zas69tma202ib290dydjsxkn2tosmdyoywbkuc33fe42hzaqorsk6ik8kv5uw5wywy4yh4zl11bwkk8mdoz0px87n16ggpeebnypori2kr0g61ttnggc33tbtkgfertk9vjib1rnxf29aub53tqssutk47fi0wwy53gp7qdwpk0f5672raa0id5fvzj6s010qs68hr3xaflrg3txsfdgrba2lfa9g1olh6qvplmi24ou1bghxio8fofqp4iwyp0tw2ipd16y9qajdlq7wqo5zzqj1ml23pjlyj6tglsn3zrnob1wk48oyjv8g5g1l6mvji87n8a1dm91ccjo3jigupiekheqpu4d54is == 
\y\o\5\y\r\t\7\d\k\z\i\t\1\1\w\o\f\r\r\s\m\p\b\9\l\z\5\v\p\j\h\s\1\7\z\y\z\d\4\u\7\i\c\2\2\5\z\c\g\k\a\b\4\u\b\8\p\a\1\8\p\j\3\8\1\s\s\f\m\p\e\b\n\b\j\m\1\e\g\b\2\o\5\r\7\g\h\t\6\p\c\6\8\1\r\a\r\2\i\x\t\p\u\i\q\3\p\l\b\p\f\g\m\q\8\0\d\7\8\1\z\z\c\f\d\i\7\1\g\a\u\g\9\e\8\b\p\w\5\l\v\r\0\e\z\z\1\2\5\t\0\k\5\5\t\3\f\m\5\4\l\w\d\5\h\7\6\9\e\7\w\h\q\4\0\n\h\w\o\o\d\f\m\w\z\i\q\h\w\y\t\t\o\k\t\u\w\4\u\k\o\2\e\r\x\m\2\g\2\s\a\y\l\8\e\1\7\9\j\e\j\4\u\t\m\s\8\s\0\u\6\o\h\9\n\3\f\d\4\a\b\4\j\s\h\t\a\x\1\v\5\3\0\b\g\y\d\c\p\h\w\z\1\r\8\n\s\l\b\b\7\i\7\j\l\q\2\v\q\m\z\q\g\8\j\h\k\l\l\m\9\p\8\a\w\q\t\9\y\u\b\f\n\u\4\j\5\e\u\d\7\3\5\p\w\c\9\d\9\b\y\j\o\j\t\u\z\7\2\i\4\p\3\l\8\r\u\k\w\3\3\d\n\8\x\v\3\b\t\j\n\7\f\t\j\5\s\7\6\l\2\i\b\3\2\u\w\r\e\0\7\y\8\u\1\n\b\m\z\x\o\3\0\o\n\z\5\i\z\y\7\g\u\4\b\l\2\h\3\z\y\3\v\t\f\r\p\2\4\l\s\g\h\q\g\6\r\h\m\c\c\p\q\q\x\8\l\b\y\p\3\j\3\z\1\y\h\e\g\7\6\7\i\z\m\1\b\j\h\x\l\b\b\y\8\n\7\d\5\9\9\k\j\q\l\j\3\z\4\l\5\g\o\l\i\y\y\7\9\f\2\s\9\0\5\q\a\x\o\e\n\5\2\w\r\s\v\o\h\p\s\q\q\5\3\p\f\y\p\3\v\a\r\8\y\s\g\5\i\3\q\0\t\9\r\w\j\8\o\a\h\x\n\c\i\8\h\m\x\2\f\5\6\y\r\y\z\x\v\9\o\7\d\0\j\d\t\9\j\a\1\s\z\o\i\c\9\s\r\9\g\1\p\j\r\d\t\u\c\u\1\n\o\u\e\u\s\i\i\2\k\p\h\b\b\s\o\e\b\7\l\k\t\t\e\n\g\o\m\w\g\9\4\v\j\l\l\f\g\0\z\l\e\q\j\n\p\z\m\v\z\e\n\f\w\q\i\3\d\2\t\l\k\4\q\o\x\f\f\p\h\1\y\5\d\7\m\3\a\p\6\s\i\p\9\r\o\2\z\a\s\6\9\t\m\a\2\0\2\i\b\2\9\0\d\y\d\j\s\x\k\n\2\t\o\s\m\d\y\o\y\w\b\k\u\c\3\3\f\e\4\2\h\z\a\q\o\r\s\k\6\i\k\8\k\v\5\u\w\5\w\y\w\y\4\y\h\4\z\l\1\1\b\w\k\k\8\m\d\o\z\0\p\x\8\7\n\1\6\g\g\p\e\e\b\n\y\p\o\r\i\2\k\r\0\g\6\1\t\t\n\g\g\c\3\3\t\b\t\k\g\f\e\r\t\k\9\v\j\i\b\1\r\n\x\f\2\9\a\u\b\5\3\t\q\s\s\u\t\k\4\7\f\i\0\w\w\y\5\3\g\p\7\q\d\w\p\k\0\f\5\6\7\2\r\a\a\0\i\d\5\f\v\z\j\6\s\0\1\0\q\s\6\8\h\r\3\x\a\f\l\r\g\3\t\x\s\f\d\g\r\b\a\2\l\f\a\9\g\1\o\l\h\6\q\v\p\l\m\i\2\4\o\u\1\b\g\h\x\i\o\8\f\o\f\q\p\4\i\w\y\p\0\t\w\2\i\p\d\1\6\y\9\q\a\j\d\l\q\7\w\q\o\5\z\z\q\j\1\m\l\2\3\p\j\l\y\j\6\t\g\l\s\n\3\z\r\n\o\b\1\w\k\4\8\o\y\j\v\8\g\5\g\1\l\6\m\v\j\i\8\7\n\8\a\1\d\m\9\1\c\c\j\o\3\j\i\g\u\p\i\e\k\h\e\q\p\u\4\d\5\4\i\s ]] 00:06:57.074 20:43:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:57.333 20:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:57.333 20:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.333 20:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:57.333 20:43:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.593 [2024-07-15 20:43:19.281656] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
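The dd_uring_copy flow that finishes here runs against a zram-backed io_uring bdev: a 512M zram device is hot-added as /dev/zram1, wrapped as bdev uring0 alongside a 512 MiB malloc0, and a magic dump (the 1024-character magic line plus a newline and 536869887 bytes appended from /dev/zero, 536870912 bytes in total) is pushed through uring0 and read back before the dumps are compared. A condensed sketch of the data path, with uring.json standing in (assumed name) for the bdev_malloc_create/bdev_uring_create config printed in the dumps above:

cat /sys/class/zram-control/hot_add      # returns the new device id, 1 in this run
echo 512M > /sys/block/zram1/disksize    # target of the 'echo 512M' in the trace is assumed
./build/bin/spdk_dd --if=test/dd/magic.dump0 --ob=uring0 --json uring.json
./build/bin/spdk_dd --ib=uring0 --of=test/dd/magic.dump1 --json uring.json
diff -q test/dd/magic.dump0 test/dd/magic.dump1

The long backslash-escaped [[ ... == ... ]] comparisons above are bash pattern checks of the 1024-character magic read back from each dump against the original string; the diff -q is the full-file check.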
00:06:57.593 [2024-07-15 20:43:19.281724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64051 ] 00:06:57.593 { 00:06:57.593 "subsystems": [ 00:06:57.593 { 00:06:57.593 "subsystem": "bdev", 00:06:57.593 "config": [ 00:06:57.593 { 00:06:57.593 "params": { 00:06:57.593 "block_size": 512, 00:06:57.593 "num_blocks": 1048576, 00:06:57.593 "name": "malloc0" 00:06:57.593 }, 00:06:57.593 "method": "bdev_malloc_create" 00:06:57.593 }, 00:06:57.593 { 00:06:57.593 "params": { 00:06:57.593 "filename": "/dev/zram1", 00:06:57.593 "name": "uring0" 00:06:57.593 }, 00:06:57.593 "method": "bdev_uring_create" 00:06:57.593 }, 00:06:57.593 { 00:06:57.593 "method": "bdev_wait_for_examine" 00:06:57.593 } 00:06:57.593 ] 00:06:57.593 } 00:06:57.593 ] 00:06:57.593 } 00:06:57.593 [2024-07-15 20:43:19.420905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.851 [2024-07-15 20:43:19.511596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.851 [2024-07-15 20:43:19.552512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.751  Copying: 198/512 [MB] (198 MBps) Copying: 398/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:07:00.751 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:00.751 20:43:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.751 [2024-07-15 20:43:22.651162] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
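The pass starting here (spdk_pid64102) and the NOT-wrapped one after it form the negative test for bdev_uring_delete: the config now carries an extra entry after the uring create,

  { "params": { "name": "uring0" }, "method": "bdev_uring_delete" }

so by the time spdk_dd tries to open uring0 (in the NOT-wrapped run) the bdev has already been removed, and the attempted copy is expected to fail with 'Could not open bdev uring0: No such device', which autotest_common.sh then normalises (237 -> 109 -> 1) to a plain non-zero exit for the NOT check.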
00:07:00.751 [2024-07-15 20:43:22.651238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64102 ] 00:07:01.010 { 00:07:01.010 "subsystems": [ 00:07:01.010 { 00:07:01.010 "subsystem": "bdev", 00:07:01.010 "config": [ 00:07:01.010 { 00:07:01.010 "params": { 00:07:01.010 "block_size": 512, 00:07:01.010 "num_blocks": 1048576, 00:07:01.010 "name": "malloc0" 00:07:01.010 }, 00:07:01.010 "method": "bdev_malloc_create" 00:07:01.010 }, 00:07:01.010 { 00:07:01.010 "params": { 00:07:01.010 "filename": "/dev/zram1", 00:07:01.010 "name": "uring0" 00:07:01.010 }, 00:07:01.010 "method": "bdev_uring_create" 00:07:01.010 }, 00:07:01.010 { 00:07:01.010 "params": { 00:07:01.010 "name": "uring0" 00:07:01.010 }, 00:07:01.010 "method": "bdev_uring_delete" 00:07:01.010 }, 00:07:01.010 { 00:07:01.010 "method": "bdev_wait_for_examine" 00:07:01.010 } 00:07:01.010 ] 00:07:01.010 } 00:07:01.010 ] 00:07:01.010 } 00:07:01.010 [2024-07-15 20:43:22.790108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.010 [2024-07-15 20:43:22.881541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.268 [2024-07-15 20:43:22.922867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.540  Copying: 0/0 [B] (average 0 Bps) 00:07:01.540 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.540 20:43:23 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:01.800 [2024-07-15 20:43:23.462229] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:01.800 [2024-07-15 20:43:23.462410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64126 ] 00:07:01.800 { 00:07:01.800 "subsystems": [ 00:07:01.800 { 00:07:01.800 "subsystem": "bdev", 00:07:01.800 "config": [ 00:07:01.800 { 00:07:01.800 "params": { 00:07:01.800 "block_size": 512, 00:07:01.800 "num_blocks": 1048576, 00:07:01.800 "name": "malloc0" 00:07:01.800 }, 00:07:01.800 "method": "bdev_malloc_create" 00:07:01.800 }, 00:07:01.800 { 00:07:01.800 "params": { 00:07:01.800 "filename": "/dev/zram1", 00:07:01.800 "name": "uring0" 00:07:01.800 }, 00:07:01.800 "method": "bdev_uring_create" 00:07:01.800 }, 00:07:01.800 { 00:07:01.800 "params": { 00:07:01.800 "name": "uring0" 00:07:01.800 }, 00:07:01.800 "method": "bdev_uring_delete" 00:07:01.800 }, 00:07:01.800 { 00:07:01.800 "method": "bdev_wait_for_examine" 00:07:01.800 } 00:07:01.800 ] 00:07:01.800 } 00:07:01.800 ] 00:07:01.800 } 00:07:02.058 [2024-07-15 20:43:23.756588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.058 [2024-07-15 20:43:23.851261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.058 [2024-07-15 20:43:23.892480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.316 [2024-07-15 20:43:24.056224] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:02.316 [2024-07-15 20:43:24.056271] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:02.316 [2024-07-15 20:43:24.056280] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:02.316 [2024-07-15 20:43:24.056290] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.572 [2024-07-15 20:43:24.300902] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:02.572 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:02.829 00:07:02.829 real 0m12.990s 00:07:02.829 user 0m8.567s 00:07:02.829 sys 0m10.580s 00:07:02.829 ************************************ 00:07:02.829 END TEST dd_uring_copy 00:07:02.829 ************************************ 00:07:02.829 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.829 20:43:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.829 20:43:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:02.829 00:07:02.829 real 0m13.199s 00:07:02.829 user 0m8.649s 00:07:02.829 sys 0m10.711s 00:07:02.829 20:43:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.829 ************************************ 00:07:02.829 END TEST spdk_dd_uring 00:07:02.829 ************************************ 00:07:02.829 20:43:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:03.088 20:43:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:03.088 20:43:24 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:03.088 20:43:24 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.088 20:43:24 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.088 20:43:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:03.088 ************************************ 00:07:03.088 START TEST spdk_dd_sparse 00:07:03.088 ************************************ 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:03.088 * Looking for test storage... 00:07:03.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:03.088 1+0 records in 00:07:03.088 1+0 records out 00:07:03.088 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00918969 s, 456 MB/s 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:03.088 1+0 records in 00:07:03.088 1+0 records out 00:07:03.088 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00940318 s, 446 MB/s 00:07:03.088 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:03.088 1+0 records in 00:07:03.088 1+0 records out 00:07:03.088 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00882784 s, 475 MB/s 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:03.089 ************************************ 00:07:03.089 START TEST dd_sparse_file_to_file 00:07:03.089 ************************************ 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:03.089 20:43:24 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:03.346 [2024-07-15 20:43:25.032380] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:03.346 [2024-07-15 20:43:25.032445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64230 ] 00:07:03.346 { 00:07:03.346 "subsystems": [ 00:07:03.346 { 00:07:03.346 "subsystem": "bdev", 00:07:03.346 "config": [ 00:07:03.346 { 00:07:03.346 "params": { 00:07:03.346 "block_size": 4096, 00:07:03.346 "filename": "dd_sparse_aio_disk", 00:07:03.346 "name": "dd_aio" 00:07:03.346 }, 00:07:03.346 "method": "bdev_aio_create" 00:07:03.346 }, 00:07:03.346 { 00:07:03.346 "params": { 00:07:03.346 "lvs_name": "dd_lvstore", 00:07:03.346 "bdev_name": "dd_aio" 00:07:03.346 }, 00:07:03.346 "method": "bdev_lvol_create_lvstore" 00:07:03.346 }, 00:07:03.346 { 00:07:03.346 "method": "bdev_wait_for_examine" 00:07:03.346 } 00:07:03.346 ] 00:07:03.346 } 00:07:03.346 ] 00:07:03.346 } 00:07:03.346 [2024-07-15 20:43:25.171109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.604 [2024-07-15 20:43:25.258336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.604 [2024-07-15 20:43:25.299574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.861  Copying: 12/36 [MB] (average 857 MBps) 00:07:03.861 00:07:03.861 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:03.861 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:03.861 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:03.861 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:03.861 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:03.861 20:43:25 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:03.861 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:03.862 00:07:03.862 real 0m0.626s 00:07:03.862 user 0m0.385s 00:07:03.862 sys 0m0.313s 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.862 ************************************ 00:07:03.862 END TEST dd_sparse_file_to_file 00:07:03.862 ************************************ 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:03.862 ************************************ 00:07:03.862 START TEST dd_sparse_file_to_bdev 00:07:03.862 ************************************ 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:03.862 20:43:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:03.862 [2024-07-15 20:43:25.728130] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:07:03.862 [2024-07-15 20:43:25.728205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64273 ] 00:07:03.862 { 00:07:03.862 "subsystems": [ 00:07:03.862 { 00:07:03.862 "subsystem": "bdev", 00:07:03.862 "config": [ 00:07:03.862 { 00:07:03.862 "params": { 00:07:03.862 "block_size": 4096, 00:07:03.862 "filename": "dd_sparse_aio_disk", 00:07:03.862 "name": "dd_aio" 00:07:03.862 }, 00:07:03.862 "method": "bdev_aio_create" 00:07:03.862 }, 00:07:03.862 { 00:07:03.862 "params": { 00:07:03.862 "lvs_name": "dd_lvstore", 00:07:03.862 "lvol_name": "dd_lvol", 00:07:03.862 "size_in_mib": 36, 00:07:03.862 "thin_provision": true 00:07:03.862 }, 00:07:03.862 "method": "bdev_lvol_create" 00:07:03.862 }, 00:07:03.862 { 00:07:03.862 "method": "bdev_wait_for_examine" 00:07:03.862 } 00:07:03.862 ] 00:07:03.862 } 00:07:03.862 ] 00:07:03.862 } 00:07:04.120 [2024-07-15 20:43:25.868767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.120 [2024-07-15 20:43:25.966706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.121 [2024-07-15 20:43:26.008007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.379  Copying: 12/36 [MB] (average 461 MBps) 00:07:04.379 00:07:04.638 00:07:04.638 real 0m0.614s 00:07:04.638 user 0m0.401s 00:07:04.638 sys 0m0.305s 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.638 ************************************ 00:07:04.638 END TEST dd_sparse_file_to_bdev 00:07:04.638 ************************************ 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:04.638 ************************************ 00:07:04.638 START TEST dd_sparse_bdev_to_file 00:07:04.638 ************************************ 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:04.638 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:04.638 [2024-07-15 20:43:26.408912] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:04.638 [2024-07-15 20:43:26.408978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64305 ] 00:07:04.638 { 00:07:04.638 "subsystems": [ 00:07:04.638 { 00:07:04.638 "subsystem": "bdev", 00:07:04.638 "config": [ 00:07:04.638 { 00:07:04.638 "params": { 00:07:04.638 "block_size": 4096, 00:07:04.638 "filename": "dd_sparse_aio_disk", 00:07:04.638 "name": "dd_aio" 00:07:04.638 }, 00:07:04.638 "method": "bdev_aio_create" 00:07:04.638 }, 00:07:04.638 { 00:07:04.638 "method": "bdev_wait_for_examine" 00:07:04.638 } 00:07:04.638 ] 00:07:04.638 } 00:07:04.638 ] 00:07:04.638 } 00:07:04.638 [2024-07-15 20:43:26.538715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.897 [2024-07-15 20:43:26.628432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.897 [2024-07-15 20:43:26.669810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.156  Copying: 12/36 [MB] (average 705 MBps) 00:07:05.156 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:05.156 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:05.156 00:07:05.156 real 0m0.610s 00:07:05.156 user 0m0.386s 00:07:05.157 sys 0m0.297s 00:07:05.157 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.157 ************************************ 00:07:05.157 20:43:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:05.157 END TEST dd_sparse_bdev_to_file 00:07:05.157 ************************************ 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:05.157 00:07:05.157 real 0m2.275s 00:07:05.157 user 0m1.313s 00:07:05.157 sys 0m1.206s 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.157 20:43:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:05.157 ************************************ 00:07:05.157 END TEST spdk_dd_sparse 00:07:05.157 ************************************ 00:07:05.416 20:43:27 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:05.416 20:43:27 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:05.416 20:43:27 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.416 20:43:27 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.416 20:43:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:05.416 ************************************ 00:07:05.416 START TEST spdk_dd_negative 00:07:05.416 ************************************ 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:05.416 * Looking for test storage... 00:07:05.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.416 ************************************ 00:07:05.416 START TEST dd_invalid_arguments 00:07:05.416 ************************************ 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.416 20:43:27 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.416 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:05.693 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:05.693 00:07:05.693 CPU options: 00:07:05.693 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:05.693 (like [0,1,10]) 00:07:05.693 --lcores lcore to CPU mapping list. The list is in the format: 00:07:05.693 [<,lcores[@CPUs]>...] 00:07:05.693 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:05.693 Within the group, '-' is used for range separator, 00:07:05.693 ',' is used for single number separator. 00:07:05.693 '( )' can be omitted for single element group, 00:07:05.693 '@' can be omitted if cpus and lcores have the same value 00:07:05.693 --disable-cpumask-locks Disable CPU core lock files. 00:07:05.693 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:05.693 pollers in the app support interrupt mode) 00:07:05.693 -p, --main-core main (primary) core for DPDK 00:07:05.693 00:07:05.693 Configuration options: 00:07:05.693 -c, --config, --json JSON config file 00:07:05.693 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:05.693 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:05.693 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:05.693 --rpcs-allowed comma-separated list of permitted RPCS 00:07:05.693 --json-ignore-init-errors don't exit on invalid config entry 00:07:05.693 00:07:05.693 Memory options: 00:07:05.693 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:05.693 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:05.693 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:05.693 -R, --huge-unlink unlink huge files after initialization 00:07:05.693 -n, --mem-channels number of memory channels used for DPDK 00:07:05.693 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:05.693 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:05.693 --no-huge run without using hugepages 00:07:05.693 -i, --shm-id shared memory ID (optional) 00:07:05.693 -g, --single-file-segments force creating just one hugetlbfs file 00:07:05.693 00:07:05.693 PCI options: 00:07:05.693 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:05.693 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:05.693 -u, --no-pci disable PCI access 00:07:05.693 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:05.693 00:07:05.693 Log options: 00:07:05.693 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:05.693 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:05.693 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:05.693 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:05.693 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:05.693 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:05.693 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:05.693 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:05.693 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:05.693 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:05.693 virtio_vfio_user, vmd) 00:07:05.693 --silence-noticelog disable notice level logging to stderr 00:07:05.693 00:07:05.693 Trace options: 00:07:05.693 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:05.693 setting 0 to disable trace (default 32768) 00:07:05.693 Tracepoints vary in size and can use more than one trace entry. 00:07:05.693 -e, --tpoint-group [:] 00:07:05.693 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:05.693 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:05.693 [2024-07-15 20:43:27.339342] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:05.693 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:05.693 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:05.693 a tracepoint group. First tpoint inside a group can be enabled by 00:07:05.693 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:05.693 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:05.693 in /include/spdk_internal/trace_defs.h 00:07:05.693 00:07:05.693 Other options: 00:07:05.693 -h, --help show this usage 00:07:05.693 -v, --version print SPDK version 00:07:05.693 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:05.693 --env-context Opaque context for use of the env implementation 00:07:05.693 00:07:05.693 Application specific: 00:07:05.693 [--------- DD Options ---------] 00:07:05.693 --if Input file. Must specify either --if or --ib. 00:07:05.693 --ib Input bdev. Must specifier either --if or --ib 00:07:05.693 --of Output file. Must specify either --of or --ob. 00:07:05.693 --ob Output bdev. Must specify either --of or --ob. 00:07:05.693 --iflag Input file flags. 00:07:05.693 --oflag Output file flags. 00:07:05.693 --bs I/O unit size (default: 4096) 00:07:05.693 --qd Queue depth (default: 2) 00:07:05.693 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:05.693 --skip Skip this many I/O units at start of input. (default: 0) 00:07:05.693 --seek Skip this many I/O units at start of output. (default: 0) 00:07:05.693 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:05.693 --sparse Enable hole skipping in input target 00:07:05.693 Available iflag and oflag values: 00:07:05.693 append - append mode 00:07:05.693 direct - use direct I/O for data 00:07:05.693 directory - fail unless a directory 00:07:05.693 dsync - use synchronized I/O for data 00:07:05.693 noatime - do not update access time 00:07:05.693 noctty - do not assign controlling terminal from file 00:07:05.693 nofollow - do not follow symlinks 00:07:05.693 nonblock - use non-blocking I/O 00:07:05.693 sync - use synchronized I/O for data and metadata 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.693 00:07:05.693 real 0m0.068s 00:07:05.693 user 0m0.041s 00:07:05.693 sys 0m0.025s 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:05.693 ************************************ 00:07:05.693 END TEST dd_invalid_arguments 00:07:05.693 ************************************ 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.693 ************************************ 00:07:05.693 START TEST dd_double_input 00:07:05.693 ************************************ 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:05.693 [2024-07-15 20:43:27.470244] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
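The double-input rejection above is the whole point of this case: spdk_dd refuses a command line that names both a file input (--if) and a bdev input (--ib). A minimal hand-run sketch of the same check, outside the harness's NOT wrapper and assuming the same /home/vagrant/spdk_repo layout and empty --ib=/--ob= values the test itself passes:

    # expected to fail with "You may specify either --if or --ib, but not both." and exit non-zero
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= \
        || echo "rejected as expected (exit $?)"

The harness treats that non-zero exit status (es=22 here) as the pass condition for the negative test, which is why the es bookkeeping and the END TEST dd_double_input banner follow directly below.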
00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.693 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.693 00:07:05.693 real 0m0.069s 00:07:05.693 user 0m0.036s 00:07:05.693 sys 0m0.032s 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.694 ************************************ 00:07:05.694 END TEST dd_double_input 00:07:05.694 ************************************ 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.694 ************************************ 00:07:05.694 START TEST dd_double_output 00:07:05.694 ************************************ 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.694 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:05.966 [2024-07-15 20:43:27.604843] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.966 00:07:05.966 real 0m0.067s 00:07:05.966 user 0m0.033s 00:07:05.966 sys 0m0.033s 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:05.966 ************************************ 00:07:05.966 END TEST dd_double_output 00:07:05.966 ************************************ 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.966 ************************************ 00:07:05.966 START TEST dd_no_input 00:07:05.966 ************************************ 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.966 20:43:27 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:05.966 [2024-07-15 20:43:27.742376] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:05.966 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.967 00:07:05.967 real 0m0.073s 00:07:05.967 user 0m0.040s 00:07:05.967 sys 0m0.032s 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:05.967 ************************************ 00:07:05.967 END TEST dd_no_input 00:07:05.967 ************************************ 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.967 ************************************ 00:07:05.967 START TEST dd_no_output 00:07:05.967 ************************************ 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.967 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.967 20:43:27 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.967 [2024-07-15 20:43:27.875469] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.226 00:07:06.226 real 0m0.068s 00:07:06.226 user 0m0.035s 00:07:06.226 sys 0m0.032s 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:06.226 ************************************ 00:07:06.226 END TEST dd_no_output 00:07:06.226 ************************************ 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.226 ************************************ 00:07:06.226 START TEST dd_wrong_blocksize 00:07:06.226 ************************************ 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.226 20:43:27 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:06.226 [2024-07-15 20:43:27.997829] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.226 00:07:06.226 real 0m0.066s 00:07:06.226 user 0m0.038s 00:07:06.226 sys 0m0.027s 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:06.226 ************************************ 00:07:06.226 END TEST dd_wrong_blocksize 00:07:06.226 ************************************ 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.226 ************************************ 00:07:06.226 START TEST dd_smaller_blocksize 00:07:06.226 ************************************ 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.226 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:06.485 [2024-07-15 20:43:28.139654] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:06.485 [2024-07-15 20:43:28.139733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64529 ] 00:07:06.485 [2024-07-15 20:43:28.280225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.485 [2024-07-15 20:43:28.369814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.744 [2024-07-15 20:43:28.410155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.003 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:07.003 [2024-07-15 20:43:28.709506] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:07.003 [2024-07-15 20:43:28.709560] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.003 [2024-07-15 20:43:28.800855] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.003 00:07:07.003 real 0m0.805s 00:07:07.003 user 0m0.373s 00:07:07.003 sys 0m0.326s 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.003 ************************************ 00:07:07.003 20:43:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:07.003 END TEST dd_smaller_blocksize 00:07:07.003 ************************************ 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.263 ************************************ 00:07:07.263 START TEST dd_invalid_count 00:07:07.263 ************************************ 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.263 20:43:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:07.263 [2024-07-15 20:43:29.009426] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.263 00:07:07.263 real 0m0.066s 00:07:07.263 user 0m0.032s 00:07:07.263 sys 0m0.034s 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:07.263 ************************************ 00:07:07.263 END TEST dd_invalid_count 
00:07:07.263 ************************************ 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.263 ************************************ 00:07:07.263 START TEST dd_invalid_oflag 00:07:07.263 ************************************ 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.263 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:07.264 [2024-07-15 20:43:29.142112] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.264 00:07:07.264 real 0m0.068s 00:07:07.264 user 0m0.032s 00:07:07.264 sys 0m0.035s 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.264 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:07.264 
************************************ 00:07:07.264 END TEST dd_invalid_oflag 00:07:07.264 ************************************ 00:07:07.523 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:07.523 20:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.524 ************************************ 00:07:07.524 START TEST dd_invalid_iflag 00:07:07.524 ************************************ 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:07.524 [2024-07-15 20:43:29.272951] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.524 00:07:07.524 real 0m0.069s 00:07:07.524 user 0m0.038s 00:07:07.524 sys 0m0.030s 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.524 ************************************ 00:07:07.524 END TEST dd_invalid_iflag 00:07:07.524 ************************************ 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.524 ************************************ 00:07:07.524 START TEST dd_unknown_flag 00:07:07.524 ************************************ 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.524 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:07.524 [2024-07-15 20:43:29.405718] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:07:07.524 [2024-07-15 20:43:29.405802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64622 ] 00:07:07.791 [2024-07-15 20:43:29.545603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.792 [2024-07-15 20:43:29.622636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.792 [2024-07-15 20:43:29.663003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.792 [2024-07-15 20:43:29.688229] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:07.792 [2024-07-15 20:43:29.688267] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.792 [2024-07-15 20:43:29.688305] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:07.792 [2024-07-15 20:43:29.688315] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.792 [2024-07-15 20:43:29.688485] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:07.792 [2024-07-15 20:43:29.688497] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.792 [2024-07-15 20:43:29.688535] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:07.792 [2024-07-15 20:43:29.688542] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:08.052 [2024-07-15 20:43:29.778234] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.052 00:07:08.052 real 0m0.513s 00:07:08.052 user 0m0.284s 00:07:08.052 sys 0m0.133s 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:08.052 ************************************ 00:07:08.052 END TEST dd_unknown_flag 00:07:08.052 ************************************ 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.052 ************************************ 00:07:08.052 START TEST dd_invalid_json 00:07:08.052 ************************************ 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:08.052 20:43:29 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.052 20:43:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:08.312 [2024-07-15 20:43:29.982563] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:07:08.312 [2024-07-15 20:43:29.982629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64651 ] 00:07:08.312 [2024-07-15 20:43:30.122908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.312 [2024-07-15 20:43:30.213128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.312 [2024-07-15 20:43:30.213192] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:08.312 [2024-07-15 20:43:30.213206] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:08.312 [2024-07-15 20:43:30.213215] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.312 [2024-07-15 20:43:30.213246] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.571 00:07:08.571 real 0m0.370s 00:07:08.571 user 0m0.193s 00:07:08.571 sys 0m0.074s 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:08.571 ************************************ 00:07:08.571 END TEST dd_invalid_json 00:07:08.571 ************************************ 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:08.571 00:07:08.571 real 0m3.217s 00:07:08.571 user 0m1.463s 00:07:08.571 sys 0m1.432s 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.571 20:43:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.571 ************************************ 00:07:08.571 END TEST spdk_dd_negative 00:07:08.571 ************************************ 00:07:08.571 20:43:30 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:08.571 00:07:08.571 real 1m9.694s 00:07:08.571 user 0m43.659s 00:07:08.571 sys 0m30.268s 00:07:08.571 20:43:30 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.571 20:43:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:08.571 ************************************ 00:07:08.571 END TEST spdk_dd 00:07:08.571 ************************************ 00:07:08.571 20:43:30 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.571 20:43:30 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:08.571 20:43:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:08.571 20:43:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:08.571 20:43:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.571 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.831 20:43:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 
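Every dd_* case above follows the same negative-test pattern: the spdk_dd invocation is wrapped in a helper that only passes when the command exits non-zero, and the resulting error status (es) is normalized (e.g. es=244 -> es=116 -> es=1 in dd_smaller_blocksize) before the timing summary and the END TEST banner are printed. A minimal sketch of that pattern, using a simplified stand-in called expect_failure rather than the actual NOT/valid_exec_arg helpers from autotest_common.sh:

# Simplified stand-in for the NOT/valid_exec_arg wrapper traced above; the real
# helpers also resolve the binary path (type -t / type -P) and remap large exit codes.
expect_failure() {
    local es=0
    "$@" || es=$?          # run the command, remember its exit status
    (( es != 0 ))          # the test passes only if the wrapped command failed
}

# e.g. dd_wrong_blocksize passes only if an invalid --bs makes spdk_dd fail
expect_failure ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=0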
00:07:08.831 20:43:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:08.831 20:43:30 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:08.831 20:43:30 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:08.831 20:43:30 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:08.831 20:43:30 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:08.831 20:43:30 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.831 20:43:30 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.831 20:43:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.831 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.831 ************************************ 00:07:08.831 START TEST nvmf_tcp 00:07:08.831 ************************************ 00:07:08.831 20:43:30 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.831 * Looking for test storage... 00:07:08.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.831 20:43:30 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.831 20:43:30 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.831 20:43:30 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.831 20:43:30 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.831 20:43:30 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.831 20:43:30 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.831 20:43:30 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:08.831 20:43:30 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:08.831 20:43:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.831 20:43:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:08.831 20:43:30 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:08.831 20:43:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.832 20:43:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.832 20:43:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.832 ************************************ 00:07:08.832 START TEST nvmf_host_management 00:07:08.832 ************************************ 00:07:08.832 
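The START/END banners and the real/user/sys timing summaries that delimit every section of this log come from the run_test wrapper invoked just above. A rough, heavily simplified sketch of what it does — the actual function in autotest_common.sh also validates its argument count (the '[' 2 -le 1 ']' checks traced after each call) and manages xtrace state:

# Heavily simplified sketch of run_test; the banner text matches the log,
# the real helper's body differs.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # e.g. host_management.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}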
20:43:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:09.091 * Looking for test storage... 00:07:09.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.091 20:43:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:09.091 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:09.091 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.091 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.091 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:09.092 Cannot find device "nvmf_init_br" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:09.092 Cannot find device "nvmf_tgt_br" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:09.092 Cannot find device "nvmf_tgt_br2" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:09.092 Cannot find device "nvmf_init_br" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:09.092 Cannot find device "nvmf_tgt_br" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:09.092 20:43:30 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:09.092 Cannot find device "nvmf_tgt_br2" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:09.092 Cannot find device "nvmf_br" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:09.092 Cannot find device "nvmf_init_if" 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:09.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:09.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:09.092 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:09.352 20:43:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
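Condensed, the virtual topology that nvmf_veth_init has just assembled looks like the recap below (interface names and addresses exactly as traced; the failing "Cannot find device" probes above are only the idempotent cleanup pass, each followed by true so the failure is ignored). The bridge membership, the iptables rule for port 4420, and the connectivity pings follow immediately below in the log.

# Recap of the veth/namespace topology built above (the second target link,
# nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3, is set up the same way and omitted here).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up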
00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:09.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:07:09.352 00:07:09.352 --- 10.0.0.2 ping statistics --- 00:07:09.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.352 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:09.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:09.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:07:09.352 00:07:09.352 --- 10.0.0.3 ping statistics --- 00:07:09.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.352 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:09.352 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:09.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:07:09.613 00:07:09.613 --- 10.0.0.1 ping statistics --- 00:07:09.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.613 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64907 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64907 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64907 ']' 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.613 20:43:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.613 [2024-07-15 20:43:31.363991] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:09.613 [2024-07-15 20:43:31.364050] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.613 [2024-07-15 20:43:31.506251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.872 [2024-07-15 20:43:31.596056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.872 [2024-07-15 20:43:31.596103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.872 [2024-07-15 20:43:31.596113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.872 [2024-07-15 20:43:31.596121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.872 [2024-07-15 20:43:31.596128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
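Stripped of the xtrace bookkeeping, the target bring-up above reduces to launching nvmf_tgt inside the namespace and polling its RPC socket before any configuration RPCs are issued; -m 0x1E pins it to cores 1-4, which matches the four reactor threads reported. A minimal sketch, assuming scripts/rpc.py with rpc_get_methods as the readiness probe — the real waitforlisten helper does more (pid liveness checks, a retry limit):

# Minimal sketch of the nvmfappstart/waitforlisten sequence traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1              # wait for the target to listen on its RPC socket
done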
00:07:09.872 [2024-07-15 20:43:31.596330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.872 [2024-07-15 20:43:31.597354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.872 [2024-07-15 20:43:31.597546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:09.872 [2024-07-15 20:43:31.597580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.873 [2024-07-15 20:43:31.639191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.440 [2024-07-15 20:43:32.238339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.440 Malloc0 00:07:10.440 [2024-07-15 20:43:32.320536] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.440 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64964 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64964 /var/tmp/bdevperf.sock 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64964 ']' 
00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:10.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:10.700 { 00:07:10.700 "params": { 00:07:10.700 "name": "Nvme$subsystem", 00:07:10.700 "trtype": "$TEST_TRANSPORT", 00:07:10.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:10.700 "adrfam": "ipv4", 00:07:10.700 "trsvcid": "$NVMF_PORT", 00:07:10.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:10.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:10.700 "hdgst": ${hdgst:-false}, 00:07:10.700 "ddgst": ${ddgst:-false} 00:07:10.700 }, 00:07:10.700 "method": "bdev_nvme_attach_controller" 00:07:10.700 } 00:07:10.700 EOF 00:07:10.700 )") 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:10.700 20:43:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:10.700 "params": { 00:07:10.700 "name": "Nvme0", 00:07:10.700 "trtype": "tcp", 00:07:10.700 "traddr": "10.0.0.2", 00:07:10.700 "adrfam": "ipv4", 00:07:10.700 "trsvcid": "4420", 00:07:10.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:10.700 "hdgst": false, 00:07:10.700 "ddgst": false 00:07:10.700 }, 00:07:10.700 "method": "bdev_nvme_attach_controller" 00:07:10.700 }' 00:07:10.700 [2024-07-15 20:43:32.433535] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:07:10.700 [2024-07-15 20:43:32.434191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64964 ] 00:07:10.700 [2024-07-15 20:43:32.576243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.958 [2024-07-15 20:43:32.660894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.958 [2024-07-15 20:43:32.710668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.958 Running I/O for 10 seconds... 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.527 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:11.527 task offset: 8192 on job bdev=Nvme0n1 fails 00:07:11.527 00:07:11.527 Latency(us) 00:07:11.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.527 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:11.527 Job: Nvme0n1 ended in about 0.52 seconds with error 00:07:11.527 Verification LBA range: start 0x0 length 0x400 00:07:11.527 Nvme0n1 : 0.52 2090.33 130.65 122.96 0.00 28264.65 2618.81 28004.14 00:07:11.527 =================================================================================================================== 00:07:11.527 Total : 2090.33 130.65 122.96 0.00 28264.65 2618.81 28004.14 00:07:11.527 [2024-07-15 20:43:33.340567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.340787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.340950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341449] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.527 [2024-07-15 20:43:33.341524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.527 [2024-07-15 20:43:33.341534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.341980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.341989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.528 [2024-07-15 20:43:33.342571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1314ec0 is same with the state(5) to be set 00:07:11.528 [2024-07-15 20:43:33.342646] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1314ec0 was disconnected and freed. reset controller. 
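The long run of ABORTED - SQ DELETION completions above is the expected outcome of the nvmf_subsystem_remove_host call made while bdevperf was still writing: the target tears down the host's qpair, every queued WRITE is failed back, and the qpair is freed. To confirm the subsystem's host ACL from the shell after the remove/add round trip, one option (the jq filter is illustrative, assuming the stock rpc.py and the hosts/allow_any_host fields returned by nvmf_get_subsystems) is:

  # Sketch: list the host NQNs the subsystem currently admits
  rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | {allow_any_host, hosts}'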
00:07:11.528 [2024-07-15 20:43:33.342756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.528 [2024-07-15 20:43:33.342768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.528 [2024-07-15 20:43:33.342786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.528 [2024-07-15 20:43:33.342804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.528 [2024-07-15 20:43:33.342822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.528 [2024-07-15 20:43:33.342831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130cd50 is same with the state(5) to be set 00:07:11.528 [2024-07-15 20:43:33.343683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:11.528 [2024-07-15 20:43:33.345595] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.528 [2024-07-15 20:43:33.345611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130cd50 (9): Bad file descriptor 00:07:11.528 [2024-07-15 20:43:33.349294] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:11.528 [2024-07-15 20:43:33.349511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:11.529 [2024-07-15 20:43:33.349674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:11.529 [2024-07-15 20:43:33.349766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:11.529 [2024-07-15 20:43:33.349843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:11.529 [2024-07-15 20:43:33.349891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:11.529 [2024-07-15 20:43:33.349983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x130cd50 00:07:11.529 [2024-07-15 20:43:33.350049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130cd50 (9): Bad file descriptor 00:07:11.529 [2024-07-15 20:43:33.350146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:11.529 [2024-07-15 20:43:33.350207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:11.529 [2024-07-15 20:43:33.350298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:11.529 [2024-07-15 20:43:33.350334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:11.529 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.529 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.529 20:43:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.529 20:43:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:12.463 20:43:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64964 00:07:12.463 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64964) - No such process 00:07:12.463 20:43:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:12.463 20:43:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:12.722 { 00:07:12.722 "params": { 00:07:12.722 "name": "Nvme$subsystem", 00:07:12.722 "trtype": "$TEST_TRANSPORT", 00:07:12.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:12.722 "adrfam": "ipv4", 00:07:12.722 "trsvcid": "$NVMF_PORT", 00:07:12.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:12.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:12.722 "hdgst": ${hdgst:-false}, 00:07:12.722 "ddgst": ${ddgst:-false} 00:07:12.722 }, 00:07:12.722 "method": "bdev_nvme_attach_controller" 00:07:12.722 } 00:07:12.722 EOF 00:07:12.722 )") 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:12.722 20:43:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:12.722 "params": { 00:07:12.722 "name": "Nvme0", 00:07:12.722 "trtype": "tcp", 00:07:12.722 "traddr": "10.0.0.2", 00:07:12.722 "adrfam": "ipv4", 00:07:12.722 "trsvcid": "4420", 00:07:12.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:12.722 "hdgst": false, 00:07:12.722 "ddgst": false 00:07:12.722 }, 00:07:12.722 "method": "bdev_nvme_attach_controller" 00:07:12.722 }' 00:07:12.722 [2024-07-15 20:43:34.428643] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:07:12.722 [2024-07-15 20:43:34.428713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64997 ] 00:07:12.722 [2024-07-15 20:43:34.565656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.980 [2024-07-15 20:43:34.651558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.980 [2024-07-15 20:43:34.700992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.980 Running I/O for 1 seconds... 00:07:14.358 00:07:14.358 Latency(us) 00:07:14.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.358 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:14.358 Verification LBA range: start 0x0 length 0x400 00:07:14.358 Nvme0n1 : 1.03 2184.76 136.55 0.00 0.00 28832.86 3053.08 27161.91 00:07:14.358 =================================================================================================================== 00:07:14.358 Total : 2184.76 136.55 0.00 0.00 28832.86 3053.08 27161.91 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.358 rmmod nvme_tcp 00:07:14.358 rmmod nvme_fabrics 00:07:14.358 rmmod nvme_keyring 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64907 ']' 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64907 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 64907 ']' 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 64907 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64907 00:07:14.358 
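The nvmftestfini sequence above unloads the nvme-tcp/nvme-fabrics modules and then tears down the long-lived target with killprocess, which first checks that the PID still belongs to an SPDK reactor before signalling it. A simplified sketch of that guard (the real helper in autotest_common.sh does more bookkeeping) is:

  # Simplified sketch of the killprocess guard: only kill the PID if it still looks like an SPDK process,
  # then reap it (wait works because the target was launched from this same shell).
  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0
      local comm
      comm=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($comm)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }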
20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64907' 00:07:14.358 killing process with pid 64907 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 64907 00:07:14.358 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 64907 00:07:14.617 [2024-07-15 20:43:36.366333] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:14.617 00:07:14.617 real 0m5.745s 00:07:14.617 user 0m21.365s 00:07:14.617 sys 0m1.626s 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.617 20:43:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.617 ************************************ 00:07:14.617 END TEST nvmf_host_management 00:07:14.617 ************************************ 00:07:14.617 20:43:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:14.617 20:43:36 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.617 20:43:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:14.617 20:43:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.617 20:43:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.617 ************************************ 00:07:14.617 START TEST nvmf_lvol 00:07:14.617 ************************************ 00:07:14.617 20:43:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.877 * Looking for test storage... 
00:07:14.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.877 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:14.878 20:43:36 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:14.878 Cannot find device "nvmf_tgt_br" 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:14.878 Cannot find device "nvmf_tgt_br2" 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:14.878 Cannot find device "nvmf_tgt_br" 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:14.878 Cannot find device "nvmf_tgt_br2" 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:14.878 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:15.137 20:43:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:15.137 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:15.137 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:15.137 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:15.137 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:15.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:07:15.397 00:07:15.397 --- 10.0.0.2 ping statistics --- 00:07:15.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.397 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:15.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:15.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:07:15.397 00:07:15.397 --- 10.0.0.3 ping statistics --- 00:07:15.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.397 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:15.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:15.397 00:07:15.397 --- 10.0.0.1 ping statistics --- 00:07:15.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.397 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65217 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65217 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65217 ']' 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.397 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:15.397 [2024-07-15 20:43:37.151614] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:15.397 [2024-07-15 20:43:37.151675] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.397 [2024-07-15 20:43:37.294689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.656 [2024-07-15 20:43:37.375111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.656 [2024-07-15 20:43:37.375163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:15.656 [2024-07-15 20:43:37.375181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.656 [2024-07-15 20:43:37.375189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.656 [2024-07-15 20:43:37.375211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.656 [2024-07-15 20:43:37.375425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.656 [2024-07-15 20:43:37.375607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.656 [2024-07-15 20:43:37.375610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.656 [2024-07-15 20:43:37.416258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.225 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.225 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:16.225 20:43:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.225 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.225 20:43:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.225 20:43:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.225 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.484 [2024-07-15 20:43:38.173323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.484 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.743 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:16.743 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:16.743 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:16.743 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:17.002 20:43:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:17.261 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4598a482-1f23-4818-9ec9-c749e149f13d 00:07:17.261 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4598a482-1f23-4818-9ec9-c749e149f13d lvol 20 00:07:17.520 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=45fce56b-7735-4605-b0b4-410289452aff 00:07:17.520 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:17.520 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45fce56b-7735-4605-b0b4-410289452aff 00:07:17.779 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.038 [2024-07-15 20:43:39.722729] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.038 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.038 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:18.038 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65281 00:07:18.038 20:43:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:19.431 20:43:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 45fce56b-7735-4605-b0b4-410289452aff MY_SNAPSHOT 00:07:19.431 20:43:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b6f8b9b9-59ce-4de8-989b-dda7ef2d872a 00:07:19.431 20:43:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 45fce56b-7735-4605-b0b4-410289452aff 30 00:07:19.688 20:43:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b6f8b9b9-59ce-4de8-989b-dda7ef2d872a MY_CLONE 00:07:19.688 20:43:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b12c3df3-126f-46de-9594-42c6d64617a6 00:07:19.688 20:43:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b12c3df3-126f-46de-9594-42c6d64617a6 00:07:20.254 20:43:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65281 00:07:28.381 Initializing NVMe Controllers 00:07:28.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.381 Controller IO queue size 128, less than required. 00:07:28.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:28.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:28.381 Initialization complete. Launching workers. 
00:07:28.381 ======================================================== 00:07:28.381 Latency(us) 00:07:28.381 Device Information : IOPS MiB/s Average min max 00:07:28.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12204.40 47.67 10489.74 764.96 91229.91 00:07:28.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12139.10 47.42 10545.23 2112.83 47963.07 00:07:28.381 ======================================================== 00:07:28.381 Total : 24343.50 95.09 10517.41 764.96 91229.91 00:07:28.381 00:07:28.381 20:43:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.639 20:43:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 45fce56b-7735-4605-b0b4-410289452aff 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4598a482-1f23-4818-9ec9-c749e149f13d 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.897 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.155 rmmod nvme_tcp 00:07:29.155 rmmod nvme_fabrics 00:07:29.155 rmmod nvme_keyring 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65217 ']' 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65217 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65217 ']' 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65217 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65217 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.155 killing process with pid 65217 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65217' 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65217 00:07:29.155 20:43:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65217 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.413 
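For readers skimming the dense trace above, the RPC sequence that nvmf_lvol.sh drives in this run condenses to the sketch below. It is a hand-written summary of the commands already logged, not the test script itself; the shell variables are introduced here only for readability (the UUIDs are per-run), and the rpc.py path and addresses are the ones seen in this log.

# Condensed sketch of the nvmf_lvol flow traced above (paths/sizes as logged in this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode0

$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport init
$rpc bdev_malloc_create 64 512                                # -> Malloc0
$rpc bdev_malloc_create 64 512                                # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                # lvstore on top of the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)               # 20 MiB lvol
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK0
$rpc nvmf_subsystem_add_ns "$nqn" "$lvol"
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf runs randwrite for 10 s against 10.0.0.2:4420, the lvol is
# snapshotted, resized and cloned, and the clone is inflated:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

# Teardown, mirroring the trace:
$rpc nvmf_delete_subsystem "$nqn"
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"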
20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:29.413 00:07:29.413 real 0m14.669s 00:07:29.413 user 0m59.746s 00:07:29.413 sys 0m5.177s 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.413 ************************************ 00:07:29.413 END TEST nvmf_lvol 00:07:29.413 ************************************ 00:07:29.413 20:43:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.413 20:43:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.413 20:43:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.413 20:43:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.413 20:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.413 ************************************ 00:07:29.413 START TEST nvmf_lvs_grow 00:07:29.413 ************************************ 00:07:29.413 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.774 * Looking for test storage... 
00:07:29.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:29.774 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:29.775 Cannot find device "nvmf_tgt_br" 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.775 Cannot find device "nvmf_tgt_br2" 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:29.775 Cannot find device "nvmf_tgt_br" 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:29.775 Cannot find device "nvmf_tgt_br2" 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.775 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.775 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:30.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:30.033 00:07:30.033 --- 10.0.0.2 ping statistics --- 00:07:30.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.033 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:30.033 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.033 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:07:30.033 00:07:30.033 --- 10.0.0.3 ping statistics --- 00:07:30.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.033 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:30.033 00:07:30.033 --- 10.0.0.1 ping statistics --- 00:07:30.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.033 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65609 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65609 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65609 ']' 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
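The "Cannot find device" and "Cannot open network namespace" messages above come from the cleanup half of nvmf_veth_init and are tolerated (each failing command is followed by a "# true" in the trace); the setup half then rebuilds the topology from scratch. Stripped of the xtrace noise, the wiring this run creates is roughly the following (all commands appear verbatim in the trace):

# Veth/namespace topology built by nvmf_veth_init (cleanup failures above are ignored).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side (host)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator: 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace reaches the initiator

The nvmf_tgt application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt), which is why the waitforlisten loop below polls the UNIX socket /var/tmp/spdk.sock.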
00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.033 20:43:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.033 [2024-07-15 20:43:51.862582] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:07:30.033 [2024-07-15 20:43:51.862645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.291 [2024-07-15 20:43:52.004738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.291 [2024-07-15 20:43:52.091651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.292 [2024-07-15 20:43:52.091701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.292 [2024-07-15 20:43:52.091710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.292 [2024-07-15 20:43:52.091718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.292 [2024-07-15 20:43:52.091725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.292 [2024-07-15 20:43:52.091754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.292 [2024-07-15 20:43:52.132618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.858 20:43:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.117 [2024-07-15 20:43:52.917323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.117 ************************************ 00:07:31.117 START TEST lvs_grow_clean 00:07:31.117 ************************************ 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:31.117 20:43:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.117 20:43:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.374 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:31.374 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:31.630 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b1961718-9067-41bf-bc71-85d60ef71328 00:07:31.630 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:31.630 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:31.630 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:31.630 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:31.630 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b1961718-9067-41bf-bc71-85d60ef71328 lvol 150 00:07:31.887 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=30bac455-9630-441f-9c30-7eb8750e2abf 00:07:31.887 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:31.887 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:32.145 [2024-07-15 20:43:53.905486] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:32.145 [2024-07-15 20:43:53.905551] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.145 true 00:07:32.145 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:32.145 20:43:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:32.404 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:32.404 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.404 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 30bac455-9630-441f-9c30-7eb8750e2abf 00:07:32.662 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.920 [2024-07-15 20:43:54.664642] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.920 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65685 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65685 /var/tmp/bdevperf.sock 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65685 ']' 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.179 20:43:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 [2024-07-15 20:43:54.911503] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
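Before the bdevperf initialization that continues below, lvs_grow_clean has already staged everything needed for the grow test. Condensed from the trace (shell variables added here only for readability; the lvstore and lvol UUIDs are per-run):

# AIO-backed lvstore prepared by lvs_grow_clean (condensed from the trace above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"                               # 200 MiB backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096             # expose it as bdev "aio_bdev"
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 (4 MiB clusters)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB lvol, exported as cnode0 ns 1
truncate -s 400M "$aio"                               # enlarge the file underneath the bdev
$rpc bdev_aio_rescan aio_bdev                         # bdev grows: 51200 -> 102400 blocks
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49: the lvstore
                                                      # only grows once bdev_lvol_grow_lvstore is
                                                      # called further down, while bdevperf runs

The bdevperf instance started above (pid 65685) attaches to the subsystem over TCP and drives randwrite I/O for 10 seconds while the lvstore is grown underneath it.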
00:07:33.179 [2024-07-15 20:43:54.911595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65685 ] 00:07:33.179 [2024-07-15 20:43:55.053972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.437 [2024-07-15 20:43:55.150127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.437 [2024-07-15 20:43:55.191036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.003 20:43:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.003 20:43:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:34.003 20:43:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:34.261 Nvme0n1 00:07:34.261 20:43:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:34.519 [ 00:07:34.519 { 00:07:34.520 "name": "Nvme0n1", 00:07:34.520 "aliases": [ 00:07:34.520 "30bac455-9630-441f-9c30-7eb8750e2abf" 00:07:34.520 ], 00:07:34.520 "product_name": "NVMe disk", 00:07:34.520 "block_size": 4096, 00:07:34.520 "num_blocks": 38912, 00:07:34.520 "uuid": "30bac455-9630-441f-9c30-7eb8750e2abf", 00:07:34.520 "assigned_rate_limits": { 00:07:34.520 "rw_ios_per_sec": 0, 00:07:34.520 "rw_mbytes_per_sec": 0, 00:07:34.520 "r_mbytes_per_sec": 0, 00:07:34.520 "w_mbytes_per_sec": 0 00:07:34.520 }, 00:07:34.520 "claimed": false, 00:07:34.520 "zoned": false, 00:07:34.520 "supported_io_types": { 00:07:34.520 "read": true, 00:07:34.520 "write": true, 00:07:34.520 "unmap": true, 00:07:34.520 "flush": true, 00:07:34.520 "reset": true, 00:07:34.520 "nvme_admin": true, 00:07:34.520 "nvme_io": true, 00:07:34.520 "nvme_io_md": false, 00:07:34.520 "write_zeroes": true, 00:07:34.520 "zcopy": false, 00:07:34.520 "get_zone_info": false, 00:07:34.520 "zone_management": false, 00:07:34.520 "zone_append": false, 00:07:34.520 "compare": true, 00:07:34.520 "compare_and_write": true, 00:07:34.520 "abort": true, 00:07:34.520 "seek_hole": false, 00:07:34.520 "seek_data": false, 00:07:34.520 "copy": true, 00:07:34.520 "nvme_iov_md": false 00:07:34.520 }, 00:07:34.520 "memory_domains": [ 00:07:34.520 { 00:07:34.520 "dma_device_id": "system", 00:07:34.520 "dma_device_type": 1 00:07:34.520 } 00:07:34.520 ], 00:07:34.520 "driver_specific": { 00:07:34.520 "nvme": [ 00:07:34.520 { 00:07:34.520 "trid": { 00:07:34.520 "trtype": "TCP", 00:07:34.520 "adrfam": "IPv4", 00:07:34.520 "traddr": "10.0.0.2", 00:07:34.520 "trsvcid": "4420", 00:07:34.520 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:34.520 }, 00:07:34.520 "ctrlr_data": { 00:07:34.520 "cntlid": 1, 00:07:34.520 "vendor_id": "0x8086", 00:07:34.520 "model_number": "SPDK bdev Controller", 00:07:34.520 "serial_number": "SPDK0", 00:07:34.520 "firmware_revision": "24.09", 00:07:34.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.520 "oacs": { 00:07:34.520 "security": 0, 00:07:34.520 "format": 0, 00:07:34.520 "firmware": 0, 00:07:34.520 "ns_manage": 0 00:07:34.520 }, 00:07:34.520 "multi_ctrlr": true, 00:07:34.520 
"ana_reporting": false 00:07:34.520 }, 00:07:34.520 "vs": { 00:07:34.520 "nvme_version": "1.3" 00:07:34.520 }, 00:07:34.520 "ns_data": { 00:07:34.520 "id": 1, 00:07:34.520 "can_share": true 00:07:34.520 } 00:07:34.520 } 00:07:34.520 ], 00:07:34.520 "mp_policy": "active_passive" 00:07:34.520 } 00:07:34.520 } 00:07:34.520 ] 00:07:34.520 20:43:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65704 00:07:34.520 20:43:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:34.520 20:43:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:34.520 Running I/O for 10 seconds... 00:07:35.454 Latency(us) 00:07:35.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.454 Nvme0n1 : 1.00 10785.00 42.13 0.00 0.00 0.00 0.00 0.00 00:07:35.454 =================================================================================================================== 00:07:35.454 Total : 10785.00 42.13 0.00 0.00 0.00 0.00 0.00 00:07:35.454 00:07:36.386 20:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:36.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.386 Nvme0n1 : 2.00 10599.50 41.40 0.00 0.00 0.00 0.00 0.00 00:07:36.386 =================================================================================================================== 00:07:36.386 Total : 10599.50 41.40 0.00 0.00 0.00 0.00 0.00 00:07:36.386 00:07:36.644 true 00:07:36.644 20:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:36.644 20:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:36.902 20:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:36.902 20:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:36.902 20:43:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65704 00:07:37.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.468 Nvme0n1 : 3.00 10537.67 41.16 0.00 0.00 0.00 0.00 0.00 00:07:37.468 =================================================================================================================== 00:07:37.468 Total : 10537.67 41.16 0.00 0.00 0.00 0.00 0.00 00:07:37.468 00:07:38.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.402 Nvme0n1 : 4.00 10566.75 41.28 0.00 0.00 0.00 0.00 0.00 00:07:38.402 =================================================================================================================== 00:07:38.402 Total : 10566.75 41.28 0.00 0.00 0.00 0.00 0.00 00:07:38.402 00:07:39.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.814 Nvme0n1 : 5.00 10533.80 41.15 0.00 0.00 0.00 0.00 0.00 00:07:39.814 =================================================================================================================== 00:07:39.815 Total : 10533.80 41.15 0.00 
0.00 0.00 0.00 0.00 00:07:39.815 00:07:40.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.382 Nvme0n1 : 6.00 10504.33 41.03 0.00 0.00 0.00 0.00 0.00 00:07:40.382 =================================================================================================================== 00:07:40.382 Total : 10504.33 41.03 0.00 0.00 0.00 0.00 0.00 00:07:40.382 00:07:41.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.762 Nvme0n1 : 7.00 10473.29 40.91 0.00 0.00 0.00 0.00 0.00 00:07:41.762 =================================================================================================================== 00:07:41.762 Total : 10473.29 40.91 0.00 0.00 0.00 0.00 0.00 00:07:41.762 00:07:42.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.699 Nvme0n1 : 8.00 10447.12 40.81 0.00 0.00 0.00 0.00 0.00 00:07:42.699 =================================================================================================================== 00:07:42.699 Total : 10447.12 40.81 0.00 0.00 0.00 0.00 0.00 00:07:42.699 00:07:43.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.636 Nvme0n1 : 9.00 10426.33 40.73 0.00 0.00 0.00 0.00 0.00 00:07:43.636 =================================================================================================================== 00:07:43.636 Total : 10426.33 40.73 0.00 0.00 0.00 0.00 0.00 00:07:43.636 00:07:44.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.572 Nvme0n1 : 10.00 10398.90 40.62 0.00 0.00 0.00 0.00 0.00 00:07:44.572 =================================================================================================================== 00:07:44.572 Total : 10398.90 40.62 0.00 0.00 0.00 0.00 0.00 00:07:44.572 00:07:44.572 00:07:44.572 Latency(us) 00:07:44.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.572 Nvme0n1 : 10.00 10408.22 40.66 0.00 0.00 12294.51 7895.90 26740.79 00:07:44.572 =================================================================================================================== 00:07:44.572 Total : 10408.22 40.66 0.00 0.00 12294.51 7895.90 26740.79 00:07:44.572 0 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65685 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65685 ']' 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65685 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65685 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:44.572 killing process with pid 65685 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65685' 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@967 -- # kill 65685 00:07:44.572 Received shutdown signal, test time was about 10.000000 seconds 00:07:44.572 00:07:44.572 Latency(us) 00:07:44.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.572 =================================================================================================================== 00:07:44.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:44.572 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65685 00:07:44.831 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.831 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:45.090 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:45.090 20:44:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:45.349 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:45.349 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:45.349 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.349 [2024-07-15 20:44:07.246982] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:45.607 request: 00:07:45.607 { 00:07:45.607 "uuid": "b1961718-9067-41bf-bc71-85d60ef71328", 00:07:45.607 "method": "bdev_lvol_get_lvstores", 00:07:45.607 "req_id": 1 00:07:45.607 } 00:07:45.607 Got JSON-RPC error response 00:07:45.607 response: 00:07:45.607 { 00:07:45.607 "code": -19, 00:07:45.607 "message": "No such device" 00:07:45.607 } 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.607 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.866 aio_bdev 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 30bac455-9630-441f-9c30-7eb8750e2abf 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=30bac455-9630-441f-9c30-7eb8750e2abf 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:45.866 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:46.125 20:44:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30bac455-9630-441f-9c30-7eb8750e2abf -t 2000 00:07:46.383 [ 00:07:46.383 { 00:07:46.383 "name": "30bac455-9630-441f-9c30-7eb8750e2abf", 00:07:46.383 "aliases": [ 00:07:46.383 "lvs/lvol" 00:07:46.383 ], 00:07:46.383 "product_name": "Logical Volume", 00:07:46.383 "block_size": 4096, 00:07:46.383 "num_blocks": 38912, 00:07:46.383 "uuid": "30bac455-9630-441f-9c30-7eb8750e2abf", 00:07:46.383 "assigned_rate_limits": { 00:07:46.383 "rw_ios_per_sec": 0, 00:07:46.383 "rw_mbytes_per_sec": 0, 00:07:46.383 "r_mbytes_per_sec": 0, 00:07:46.383 "w_mbytes_per_sec": 0 00:07:46.383 }, 00:07:46.383 "claimed": false, 00:07:46.383 "zoned": false, 00:07:46.383 "supported_io_types": { 00:07:46.383 "read": true, 00:07:46.383 "write": true, 00:07:46.383 "unmap": true, 00:07:46.383 "flush": false, 00:07:46.383 "reset": true, 00:07:46.383 "nvme_admin": false, 00:07:46.383 "nvme_io": false, 00:07:46.383 "nvme_io_md": false, 00:07:46.383 "write_zeroes": true, 00:07:46.383 "zcopy": false, 00:07:46.383 "get_zone_info": false, 00:07:46.383 "zone_management": false, 00:07:46.383 "zone_append": false, 00:07:46.383 "compare": false, 00:07:46.383 "compare_and_write": false, 00:07:46.383 "abort": false, 00:07:46.383 "seek_hole": true, 00:07:46.383 "seek_data": true, 00:07:46.383 "copy": false, 00:07:46.383 "nvme_iov_md": false 00:07:46.383 }, 00:07:46.383 
"driver_specific": { 00:07:46.383 "lvol": { 00:07:46.383 "lvol_store_uuid": "b1961718-9067-41bf-bc71-85d60ef71328", 00:07:46.383 "base_bdev": "aio_bdev", 00:07:46.383 "thin_provision": false, 00:07:46.383 "num_allocated_clusters": 38, 00:07:46.383 "snapshot": false, 00:07:46.383 "clone": false, 00:07:46.383 "esnap_clone": false 00:07:46.383 } 00:07:46.383 } 00:07:46.383 } 00:07:46.383 ] 00:07:46.383 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:07:46.383 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:46.383 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:46.383 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:46.383 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:46.383 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:46.642 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:46.642 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 30bac455-9630-441f-9c30-7eb8750e2abf 00:07:46.900 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1961718-9067-41bf-bc71-85d60ef71328 00:07:47.158 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.158 20:44:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.724 00:07:47.724 real 0m16.426s 00:07:47.724 user 0m14.535s 00:07:47.724 sys 0m2.947s 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.724 ************************************ 00:07:47.724 END TEST lvs_grow_clean 00:07:47.724 ************************************ 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.724 ************************************ 00:07:47.724 START TEST lvs_grow_dirty 00:07:47.724 ************************************ 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.724 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.981 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.981 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.981 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:07:47.981 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:07:47.981 20:44:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:48.239 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:48.239 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:48.239 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 lvol 150 00:07:48.497 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=15f64ce6-634a-4e7c-b24d-c61f9409f046 00:07:48.497 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.497 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.497 [2024-07-15 20:44:10.390076] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:48.497 [2024-07-15 20:44:10.390144] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.497 true 00:07:48.756 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:48.756 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:07:48.756 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:48.756 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.014 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15f64ce6-634a-4e7c-b24d-c61f9409f046 00:07:49.273 20:44:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.273 [2024-07-15 20:44:11.137726] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.273 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65933 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65933 /var/tmp/bdevperf.sock 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 65933 ']' 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.531 20:44:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.531 [2024-07-15 20:44:11.372857] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
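The trace above is the entire dirty-path setup in one pass. Condensed, with $rpc standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and $aio_file for the test/nvmf/target/aio_bdev backing file (shorthand used only in this sketch, not variables the script itself defines), it amounts to:

  truncate -s 200M "$aio_file"                                 # 200M backing file
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096               # expose it as an AIO bdev with 4K blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 x 4MiB clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)             # 150M lvol on the (still 200M) store
  truncate -s 400M "$aio_file"                                 # grow the file underneath the bdev
  $rpc bdev_aio_rescan aio_bdev                                # bdev grows to 102400 blocks...
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # ...but the store still reports 49
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The cluster counts follow from the 4 MiB cluster size: roughly 200M and 400M of backing file minus lvstore metadata yield 49 and, after the explicit grow issued later in the run, 99 data clusters. Rescanning the AIO bdev alone never grows the lvstore; that is exactly what this test is checking.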
00:07:49.531 [2024-07-15 20:44:11.373352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65933 ] 00:07:49.790 [2024-07-15 20:44:11.512595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.790 [2024-07-15 20:44:11.589367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.790 [2024-07-15 20:44:11.629904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:50.358 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.358 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:50.358 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:50.616 Nvme0n1 00:07:50.616 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.874 [ 00:07:50.874 { 00:07:50.874 "name": "Nvme0n1", 00:07:50.874 "aliases": [ 00:07:50.874 "15f64ce6-634a-4e7c-b24d-c61f9409f046" 00:07:50.874 ], 00:07:50.874 "product_name": "NVMe disk", 00:07:50.874 "block_size": 4096, 00:07:50.874 "num_blocks": 38912, 00:07:50.874 "uuid": "15f64ce6-634a-4e7c-b24d-c61f9409f046", 00:07:50.874 "assigned_rate_limits": { 00:07:50.874 "rw_ios_per_sec": 0, 00:07:50.874 "rw_mbytes_per_sec": 0, 00:07:50.874 "r_mbytes_per_sec": 0, 00:07:50.874 "w_mbytes_per_sec": 0 00:07:50.874 }, 00:07:50.874 "claimed": false, 00:07:50.874 "zoned": false, 00:07:50.874 "supported_io_types": { 00:07:50.874 "read": true, 00:07:50.874 "write": true, 00:07:50.874 "unmap": true, 00:07:50.874 "flush": true, 00:07:50.874 "reset": true, 00:07:50.874 "nvme_admin": true, 00:07:50.874 "nvme_io": true, 00:07:50.874 "nvme_io_md": false, 00:07:50.874 "write_zeroes": true, 00:07:50.874 "zcopy": false, 00:07:50.874 "get_zone_info": false, 00:07:50.874 "zone_management": false, 00:07:50.874 "zone_append": false, 00:07:50.874 "compare": true, 00:07:50.874 "compare_and_write": true, 00:07:50.874 "abort": true, 00:07:50.874 "seek_hole": false, 00:07:50.874 "seek_data": false, 00:07:50.874 "copy": true, 00:07:50.874 "nvme_iov_md": false 00:07:50.874 }, 00:07:50.874 "memory_domains": [ 00:07:50.874 { 00:07:50.874 "dma_device_id": "system", 00:07:50.874 "dma_device_type": 1 00:07:50.874 } 00:07:50.874 ], 00:07:50.874 "driver_specific": { 00:07:50.874 "nvme": [ 00:07:50.874 { 00:07:50.874 "trid": { 00:07:50.874 "trtype": "TCP", 00:07:50.874 "adrfam": "IPv4", 00:07:50.874 "traddr": "10.0.0.2", 00:07:50.874 "trsvcid": "4420", 00:07:50.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.874 }, 00:07:50.874 "ctrlr_data": { 00:07:50.874 "cntlid": 1, 00:07:50.874 "vendor_id": "0x8086", 00:07:50.874 "model_number": "SPDK bdev Controller", 00:07:50.874 "serial_number": "SPDK0", 00:07:50.874 "firmware_revision": "24.09", 00:07:50.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.874 "oacs": { 00:07:50.874 "security": 0, 00:07:50.874 "format": 0, 00:07:50.874 "firmware": 0, 00:07:50.874 "ns_manage": 0 00:07:50.874 }, 00:07:50.874 "multi_ctrlr": true, 00:07:50.874 
"ana_reporting": false 00:07:50.874 }, 00:07:50.874 "vs": { 00:07:50.874 "nvme_version": "1.3" 00:07:50.874 }, 00:07:50.874 "ns_data": { 00:07:50.874 "id": 1, 00:07:50.874 "can_share": true 00:07:50.874 } 00:07:50.874 } 00:07:50.874 ], 00:07:50.874 "mp_policy": "active_passive" 00:07:50.874 } 00:07:50.874 } 00:07:50.874 ] 00:07:50.874 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.874 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65951 00:07:50.874 20:44:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:50.874 Running I/O for 10 seconds... 00:07:51.825 Latency(us) 00:07:51.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.825 Nvme0n1 : 1.00 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:07:51.825 =================================================================================================================== 00:07:51.825 Total : 10160.00 39.69 0.00 0.00 0.00 0.00 0.00 00:07:51.825 00:07:52.760 20:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:07:52.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.760 Nvme0n1 : 2.00 10437.00 40.77 0.00 0.00 0.00 0.00 0.00 00:07:52.760 =================================================================================================================== 00:07:52.760 Total : 10437.00 40.77 0.00 0.00 0.00 0.00 0.00 00:07:52.760 00:07:53.019 true 00:07:53.019 20:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:07:53.019 20:44:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:53.277 20:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:53.277 20:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:53.277 20:44:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65951 00:07:53.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.917 Nvme0n1 : 3.00 10471.67 40.90 0.00 0.00 0.00 0.00 0.00 00:07:53.917 =================================================================================================================== 00:07:53.918 Total : 10471.67 40.90 0.00 0.00 0.00 0.00 0.00 00:07:53.918 00:07:54.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.852 Nvme0n1 : 4.00 10455.00 40.84 0.00 0.00 0.00 0.00 0.00 00:07:54.852 =================================================================================================================== 00:07:54.852 Total : 10455.00 40.84 0.00 0.00 0.00 0.00 0.00 00:07:54.852 00:07:55.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.788 Nvme0n1 : 5.00 10340.80 40.39 0.00 0.00 0.00 0.00 0.00 00:07:55.788 =================================================================================================================== 00:07:55.788 Total : 10340.80 40.39 0.00 
0.00 0.00 0.00 0.00 00:07:55.788 00:07:57.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.168 Nvme0n1 : 6.00 10391.83 40.59 0.00 0.00 0.00 0.00 0.00 00:07:57.168 =================================================================================================================== 00:07:57.168 Total : 10391.83 40.59 0.00 0.00 0.00 0.00 0.00 00:07:57.168 00:07:57.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.796 Nvme0n1 : 7.00 9969.14 38.94 0.00 0.00 0.00 0.00 0.00 00:07:57.796 =================================================================================================================== 00:07:57.796 Total : 9969.14 38.94 0.00 0.00 0.00 0.00 0.00 00:07:57.796 00:07:59.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.173 Nvme0n1 : 8.00 9885.62 38.62 0.00 0.00 0.00 0.00 0.00 00:07:59.173 =================================================================================================================== 00:07:59.173 Total : 9885.62 38.62 0.00 0.00 0.00 0.00 0.00 00:07:59.173 00:08:00.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.109 Nvme0n1 : 9.00 9941.56 38.83 0.00 0.00 0.00 0.00 0.00 00:08:00.109 =================================================================================================================== 00:08:00.109 Total : 9941.56 38.83 0.00 0.00 0.00 0.00 0.00 00:08:00.109 00:08:01.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.046 Nvme0n1 : 10.00 9876.00 38.58 0.00 0.00 0.00 0.00 0.00 00:08:01.046 =================================================================================================================== 00:08:01.046 Total : 9876.00 38.58 0.00 0.00 0.00 0.00 0.00 00:08:01.046 00:08:01.046 00:08:01.046 Latency(us) 00:08:01.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.046 Nvme0n1 : 10.01 9881.76 38.60 0.00 0.00 12949.07 6579.92 421114.86 00:08:01.046 =================================================================================================================== 00:08:01.046 Total : 9881.76 38.60 0.00 0.00 12949.07 6579.92 421114.86 00:08:01.046 0 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65933 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 65933 ']' 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 65933 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65933 00:08:01.046 killing process with pid 65933 00:08:01.046 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.046 00:08:01.046 Latency(us) 00:08:01.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.046 =================================================================================================================== 00:08:01.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 
-- # process_name=reactor_1 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65933' 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 65933 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 65933 00:08:01.046 20:44:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.305 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.562 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:01.562 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65609 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65609 00:08:01.821 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65609 Killed "${NVMF_APP[@]}" "$@" 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66084 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66084 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66084 ']' 00:08:01.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
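With the lvol exported, bdevperf (launched with -q 128 -w randwrite -t 10 against the TCP-attached Nvme0n1) drives writes while the store is grown, and the run ends with a deliberately unclean target shutdown. A condensed sketch of the sequence traced above, reusing the $rpc/$lvs shorthand from earlier and writing $nvmfpid for the target's PID (65609 in this run):

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                        # issued while bdevperf I/O is in flight
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99
  # once the 10-second bdevperf run completes:
  $rpc nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # 61 clusters still free
  kill -9 "$nvmfpid"                                           # dirty: the lvstore is never closed cleanly

The SIGKILL is the point of the dirty variant: the grown, partially written lvstore has to be recovered from disk when the next target instance loads the same backing file.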
00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.821 20:44:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.821 [2024-07-15 20:44:23.572663] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:01.821 [2024-07-15 20:44:23.572730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.821 [2024-07-15 20:44:23.702712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.079 [2024-07-15 20:44:23.787032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.079 [2024-07-15 20:44:23.787295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.079 [2024-07-15 20:44:23.787422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.079 [2024-07-15 20:44:23.787469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.079 [2024-07-15 20:44:23.787494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.079 [2024-07-15 20:44:23.787544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.079 [2024-07-15 20:44:23.828815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.646 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.905 [2024-07-15 20:44:24.636780] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:02.905 [2024-07-15 20:44:24.637313] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:02.905 [2024-07-15 20:44:24.637634] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 15f64ce6-634a-4e7c-b24d-c61f9409f046 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=15f64ce6-634a-4e7c-b24d-c61f9409f046 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
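The "Performing recovery on blobstore" notices above are the expected consequence of that SIGKILL: when the fresh nvmf target (pid 66084) re-creates the AIO bdev over the same file, the lvstore is found dirty and is replayed before its lvol is registered again. Roughly, still using the earlier shorthand:

  $rpc bdev_aio_create "$aio_file" aio_bdev 4096    # examine of the bdev triggers blobstore recovery
  $rpc bdev_wait_for_examine                        # block until the recovered lvol bdev is registered
  $rpc bdev_get_bdevs -b "$lvol" -t 2000            # 15f64ce6-... reappears with its lvs/lvol alias

The free_clusters/total_data_clusters checks that follow (61 and 99) are what actually proves the growth performed before the kill survived the dirty restart.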
00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:02.905 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:03.163 20:44:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15f64ce6-634a-4e7c-b24d-c61f9409f046 -t 2000 00:08:03.163 [ 00:08:03.163 { 00:08:03.163 "name": "15f64ce6-634a-4e7c-b24d-c61f9409f046", 00:08:03.163 "aliases": [ 00:08:03.163 "lvs/lvol" 00:08:03.163 ], 00:08:03.163 "product_name": "Logical Volume", 00:08:03.163 "block_size": 4096, 00:08:03.163 "num_blocks": 38912, 00:08:03.163 "uuid": "15f64ce6-634a-4e7c-b24d-c61f9409f046", 00:08:03.163 "assigned_rate_limits": { 00:08:03.163 "rw_ios_per_sec": 0, 00:08:03.163 "rw_mbytes_per_sec": 0, 00:08:03.163 "r_mbytes_per_sec": 0, 00:08:03.163 "w_mbytes_per_sec": 0 00:08:03.163 }, 00:08:03.163 "claimed": false, 00:08:03.163 "zoned": false, 00:08:03.163 "supported_io_types": { 00:08:03.163 "read": true, 00:08:03.163 "write": true, 00:08:03.163 "unmap": true, 00:08:03.163 "flush": false, 00:08:03.163 "reset": true, 00:08:03.163 "nvme_admin": false, 00:08:03.163 "nvme_io": false, 00:08:03.163 "nvme_io_md": false, 00:08:03.163 "write_zeroes": true, 00:08:03.163 "zcopy": false, 00:08:03.163 "get_zone_info": false, 00:08:03.163 "zone_management": false, 00:08:03.163 "zone_append": false, 00:08:03.163 "compare": false, 00:08:03.163 "compare_and_write": false, 00:08:03.163 "abort": false, 00:08:03.163 "seek_hole": true, 00:08:03.163 "seek_data": true, 00:08:03.163 "copy": false, 00:08:03.163 "nvme_iov_md": false 00:08:03.163 }, 00:08:03.163 "driver_specific": { 00:08:03.163 "lvol": { 00:08:03.163 "lvol_store_uuid": "aedf418a-80c6-42d6-be99-e0d25ddb65a9", 00:08:03.163 "base_bdev": "aio_bdev", 00:08:03.163 "thin_provision": false, 00:08:03.163 "num_allocated_clusters": 38, 00:08:03.163 "snapshot": false, 00:08:03.163 "clone": false, 00:08:03.163 "esnap_clone": false 00:08:03.163 } 00:08:03.163 } 00:08:03.163 } 00:08:03.163 ] 00:08:03.163 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:03.163 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:03.163 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:03.422 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:03.422 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:03.422 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:03.680 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:03.680 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.938 [2024-07-15 20:44:25.604704] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:03.938 request: 00:08:03.938 { 00:08:03.938 "uuid": "aedf418a-80c6-42d6-be99-e0d25ddb65a9", 00:08:03.938 "method": "bdev_lvol_get_lvstores", 00:08:03.938 "req_id": 1 00:08:03.938 } 00:08:03.938 Got JSON-RPC error response 00:08:03.938 response: 00:08:03.938 { 00:08:03.938 "code": -19, 00:08:03.938 "message": "No such device" 00:08:03.938 } 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.938 20:44:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.196 aio_bdev 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 15f64ce6-634a-4e7c-b24d-c61f9409f046 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=15f64ce6-634a-4e7c-b24d-c61f9409f046 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:04.196 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.454 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15f64ce6-634a-4e7c-b24d-c61f9409f046 -t 2000 00:08:04.713 [ 00:08:04.713 { 00:08:04.713 "name": "15f64ce6-634a-4e7c-b24d-c61f9409f046", 00:08:04.713 "aliases": [ 00:08:04.713 "lvs/lvol" 00:08:04.713 ], 00:08:04.713 "product_name": "Logical Volume", 00:08:04.713 "block_size": 4096, 00:08:04.713 "num_blocks": 38912, 00:08:04.713 "uuid": "15f64ce6-634a-4e7c-b24d-c61f9409f046", 00:08:04.713 "assigned_rate_limits": { 00:08:04.713 "rw_ios_per_sec": 0, 00:08:04.713 "rw_mbytes_per_sec": 0, 00:08:04.713 "r_mbytes_per_sec": 0, 00:08:04.713 "w_mbytes_per_sec": 0 00:08:04.713 }, 00:08:04.713 "claimed": false, 00:08:04.713 "zoned": false, 00:08:04.713 "supported_io_types": { 00:08:04.713 "read": true, 00:08:04.713 "write": true, 00:08:04.713 "unmap": true, 00:08:04.713 "flush": false, 00:08:04.713 "reset": true, 00:08:04.713 "nvme_admin": false, 00:08:04.713 "nvme_io": false, 00:08:04.713 "nvme_io_md": false, 00:08:04.713 "write_zeroes": true, 00:08:04.713 "zcopy": false, 00:08:04.713 "get_zone_info": false, 00:08:04.713 "zone_management": false, 00:08:04.713 "zone_append": false, 00:08:04.713 "compare": false, 00:08:04.713 "compare_and_write": false, 00:08:04.713 "abort": false, 00:08:04.713 "seek_hole": true, 00:08:04.713 "seek_data": true, 00:08:04.713 "copy": false, 00:08:04.713 "nvme_iov_md": false 00:08:04.713 }, 00:08:04.713 "driver_specific": { 00:08:04.713 "lvol": { 00:08:04.713 "lvol_store_uuid": "aedf418a-80c6-42d6-be99-e0d25ddb65a9", 00:08:04.713 "base_bdev": "aio_bdev", 00:08:04.713 "thin_provision": false, 00:08:04.713 "num_allocated_clusters": 38, 00:08:04.713 "snapshot": false, 00:08:04.713 "clone": false, 00:08:04.713 "esnap_clone": false 00:08:04.713 } 00:08:04.713 } 00:08:04.713 } 00:08:04.713 ] 00:08:04.713 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:04.713 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:04.713 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:04.713 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:04.713 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:04.713 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:04.970 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:04.970 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 15f64ce6-634a-4e7c-b24d-c61f9409f046 00:08:05.227 20:44:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u aedf418a-80c6-42d6-be99-e0d25ddb65a9 00:08:05.485 20:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.485 20:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.743 ************************************ 00:08:05.743 END TEST lvs_grow_dirty 00:08:05.743 ************************************ 00:08:05.743 00:08:05.743 real 0m18.185s 00:08:05.743 user 0m36.432s 00:08:05.743 sys 0m7.502s 00:08:05.743 20:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.743 20:44:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:06.002 nvmf_trace.0 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.002 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.002 rmmod nvme_tcp 00:08:06.002 rmmod nvme_fabrics 00:08:06.260 rmmod nvme_keyring 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66084 ']' 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66084 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66084 ']' 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66084 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66084 00:08:06.260 killing process with pid 66084 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66084' 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66084 00:08:06.260 20:44:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66084 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.260 20:44:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.519 20:44:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:06.519 ************************************ 00:08:06.519 END TEST nvmf_lvs_grow 00:08:06.519 ************************************ 00:08:06.519 00:08:06.519 real 0m36.956s 00:08:06.519 user 0m56.077s 00:08:06.519 sys 0m11.267s 00:08:06.519 20:44:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.519 20:44:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.519 20:44:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:06.519 20:44:28 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.519 20:44:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.519 20:44:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.519 20:44:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.519 ************************************ 00:08:06.519 START TEST nvmf_bdev_io_wait 00:08:06.519 ************************************ 00:08:06.519 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.520 * Looking for test storage... 
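nvmf_bdev_io_wait starts the same way the other target tests do: it sources test/nvmf/common.sh, which pins the TCP port and fabricates a per-run host identity with nvme-cli. A minimal sketch of the identity part visible in the trace (the exact expression common.sh uses to derive the host ID from the NQN is not shown, so the parameter expansion below is an assumption that happens to match the values logged):

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:69e37e11-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # uuid suffix, 69e37e11-dc2b-47bc-a2e9-49065053d84e in this run
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # later handed to 'nvme connect'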
00:08:06.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.520 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:06.779 Cannot find device "nvmf_tgt_br" 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.779 Cannot find device "nvmf_tgt_br2" 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:06.779 Cannot find device "nvmf_tgt_br" 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:06.779 Cannot find device "nvmf_tgt_br2" 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:06.779 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:06.780 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.038 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:07.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:07.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:08:07.039 00:08:07.039 --- 10.0.0.2 ping statistics --- 00:08:07.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.039 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:07.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:07.039 00:08:07.039 --- 10.0.0.3 ping statistics --- 00:08:07.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.039 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:08:07.039 00:08:07.039 --- 10.0.0.1 ping statistics --- 00:08:07.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.039 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66384 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66384 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66384 ']' 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
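All three connectivity checks pass (initiator to both target addresses, and the target namespace back to the initiator), and nvmfappstart (bdev_io_wait.sh@15 above) is now bringing the target up inside the namespace; the full ip netns exec ... nvmf_tgt command appears just below. The nvmf/common.sh@209 line above is the wiring that makes this work: the `ip netns exec` prefix is prepended to the stored application command, so every later launch runs nvmf_tgt behind the namespace boundary. A condensed sketch of that composition (array names and paths are taken from the trace; the surrounding helper logic is assumed):

  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # nvmf/common.sh@209
  # nvmfappstart then appends the per-test flags; for bdev_io_wait that is:
  "${NVMF_APP[@]}" -m 0xF --wait-for-rpc &
  nvmfpid=$!                                                # 66384 in this run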
00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.039 20:44:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:07.039 [2024-07-15 20:44:28.899434] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:07.039 [2024-07-15 20:44:28.899496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.297 [2024-07-15 20:44:29.042421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.297 [2024-07-15 20:44:29.131594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.297 [2024-07-15 20:44:29.131791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.297 [2024-07-15 20:44:29.131845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.297 [2024-07-15 20:44:29.131898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.297 [2024-07-15 20:44:29.131939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.297 [2024-07-15 20:44:29.132219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.297 [2024-07-15 20:44:29.132323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.297 [2024-07-15 20:44:29.133314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.297 [2024-07-15 20:44:29.133315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.865 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.865 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:07.865 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.865 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.865 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 [2024-07-15 20:44:29.854371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 [2024-07-15 20:44:29.869444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 Malloc0 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.125 [2024-07-15 20:44:29.936739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66419 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66421 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:08.125 { 00:08:08.125 "params": { 00:08:08.125 "name": "Nvme$subsystem", 00:08:08.125 "trtype": "$TEST_TRANSPORT", 
00:08:08.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.125 "adrfam": "ipv4", 00:08:08.125 "trsvcid": "$NVMF_PORT", 00:08:08.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.125 "hdgst": ${hdgst:-false}, 00:08:08.125 "ddgst": ${ddgst:-false} 00:08:08.125 }, 00:08:08.125 "method": "bdev_nvme_attach_controller" 00:08:08.125 } 00:08:08.125 EOF 00:08:08.125 )") 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:08.125 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66423 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:08.126 { 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme$subsystem", 00:08:08.126 "trtype": "$TEST_TRANSPORT", 00:08:08.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "$NVMF_PORT", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.126 "hdgst": ${hdgst:-false}, 00:08:08.126 "ddgst": ${ddgst:-false} 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 } 00:08:08.126 EOF 00:08:08.126 )") 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:08.126 { 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme$subsystem", 00:08:08.126 "trtype": "$TEST_TRANSPORT", 00:08:08.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "$NVMF_PORT", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.126 "hdgst": ${hdgst:-false}, 00:08:08.126 "ddgst": ${ddgst:-false} 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 } 00:08:08.126 EOF 00:08:08.126 )") 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66426 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@556 -- # jq . 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:08.126 { 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme$subsystem", 00:08:08.126 "trtype": "$TEST_TRANSPORT", 00:08:08.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "$NVMF_PORT", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.126 "hdgst": ${hdgst:-false}, 00:08:08.126 "ddgst": ${ddgst:-false} 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 } 00:08:08.126 EOF 00:08:08.126 )") 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme1", 00:08:08.126 "trtype": "tcp", 00:08:08.126 "traddr": "10.0.0.2", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "4420", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.126 "hdgst": false, 00:08:08.126 "ddgst": false 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 }' 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme1", 00:08:08.126 "trtype": "tcp", 00:08:08.126 "traddr": "10.0.0.2", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "4420", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.126 "hdgst": false, 00:08:08.126 "ddgst": false 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 }' 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme1", 00:08:08.126 "trtype": "tcp", 00:08:08.126 "traddr": "10.0.0.2", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "4420", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.126 "hdgst": false, 00:08:08.126 "ddgst": false 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 }' 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
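The JSON fragments being assembled and printed here come from gen_nvmf_target_json: each call fills a params block from a heredoc, and the result is handed to a bdevperf instance through process substitution (the `--json /dev/fd/63` arguments in the command lines above). A trimmed, runnable sketch of the pattern, assuming the helper reduces to roughly the following (the real function also wraps the entries into a complete bdevperf configuration, which the trace does not show):

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Join the per-subsystem entries and pretty-print them for the consumer.
  (IFS=,; jq . <<< "[${config[*]}]")
}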
00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:08.126 20:44:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:08.126 "params": { 00:08:08.126 "name": "Nvme1", 00:08:08.126 "trtype": "tcp", 00:08:08.126 "traddr": "10.0.0.2", 00:08:08.126 "adrfam": "ipv4", 00:08:08.126 "trsvcid": "4420", 00:08:08.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:08.126 "hdgst": false, 00:08:08.126 "ddgst": false 00:08:08.126 }, 00:08:08.126 "method": "bdev_nvme_attach_controller" 00:08:08.126 }' 00:08:08.126 [2024-07-15 20:44:29.999435] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:08.127 [2024-07-15 20:44:29.999607] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:08.127 20:44:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66419 00:08:08.127 [2024-07-15 20:44:30.008740] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:08.127 [2024-07-15 20:44:30.008905] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:08.127 [2024-07-15 20:44:30.011674] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:08.127 [2024-07-15 20:44:30.011729] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:08.127 [2024-07-15 20:44:30.026108] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:08.127 [2024-07-15 20:44:30.026389] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:08.386 [2024-07-15 20:44:30.179196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-07-15 20:44:30.234187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-07-15 20:44:30.262288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:08.647 [2024-07-15 20:44:30.305667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.647 [2024-07-15 20:44:30.307138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.647 [2024-07-15 20:44:30.331695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:08.647 [2024-07-15 20:44:30.375368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.647 [2024-07-15 20:44:30.378587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.647 [2024-07-15 20:44:30.384893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.647 Running I/O for 1 seconds... 
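bdev_io_wait drives four I/O paths against the same subsystem at once: one bdevperf process per workload, each pinned to its own core by the -m mask, each reading the generated JSON over process substitution, and the script then waits on the four PIDs reported above (66419, 66421, 66423, 66426). A condensed sketch of that orchestration using the flags from the trace (the real bdev_io_wait.sh waits on the PIDs one by one):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"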
00:08:08.647 [2024-07-15 20:44:30.422064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.647 [2024-07-15 20:44:30.457243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:08.647 Running I/O for 1 seconds... 00:08:08.647 [2024-07-15 20:44:30.493709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.647 Running I/O for 1 seconds... 00:08:08.937 Running I/O for 1 seconds... 00:08:09.506 00:08:09.506 Latency(us) 00:08:09.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.506 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:09.506 Nvme1n1 : 1.00 222366.62 868.62 0.00 0.00 573.12 269.78 2237.17 00:08:09.506 =================================================================================================================== 00:08:09.506 Total : 222366.62 868.62 0.00 0.00 573.12 269.78 2237.17 00:08:09.765 00:08:09.765 Latency(us) 00:08:09.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.765 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:09.765 Nvme1n1 : 1.01 13240.50 51.72 0.00 0.00 9636.86 5632.41 16739.32 00:08:09.765 =================================================================================================================== 00:08:09.765 Total : 13240.50 51.72 0.00 0.00 9636.86 5632.41 16739.32 00:08:09.765 00:08:09.765 Latency(us) 00:08:09.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.765 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:09.765 Nvme1n1 : 1.01 9498.18 37.10 0.00 0.00 13420.80 1256.76 17160.43 00:08:09.765 =================================================================================================================== 00:08:09.765 Total : 9498.18 37.10 0.00 0.00 13420.80 1256.76 17160.43 00:08:09.765 00:08:09.765 Latency(us) 00:08:09.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.765 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:09.765 Nvme1n1 : 1.01 9294.55 36.31 0.00 0.00 13719.58 4790.18 19266.00 00:08:09.765 =================================================================================================================== 00:08:09.765 Total : 9294.55 36.31 0.00 0.00 13719.58 4790.18 19266.00 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66421 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66423 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66426 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:10.024 20:44:31 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.024 rmmod nvme_tcp 00:08:10.024 rmmod nvme_fabrics 00:08:10.024 rmmod nvme_keyring 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66384 ']' 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66384 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66384 ']' 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66384 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.024 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66384 00:08:10.283 killing process with pid 66384 00:08:10.283 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.283 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.283 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66384' 00:08:10.283 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66384 00:08:10.283 20:44:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66384 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:10.283 00:08:10.283 real 0m3.899s 00:08:10.283 user 0m16.424s 00:08:10.283 sys 0m2.228s 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.283 ************************************ 00:08:10.283 20:44:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.283 END TEST nvmf_bdev_io_wait 00:08:10.283 ************************************ 00:08:10.543 20:44:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.543 20:44:32 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:10.543 20:44:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.543 20:44:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.543 20:44:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.543 ************************************ 00:08:10.543 START TEST nvmf_queue_depth 00:08:10.543 ************************************ 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:10.543 * Looking for test storage... 00:08:10.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.543 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:10.802 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:10.802 Cannot find device "nvmf_tgt_br" 00:08:10.802 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.803 Cannot find device "nvmf_tgt_br2" 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:10.803 20:44:32 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:10.803 Cannot find device "nvmf_tgt_br" 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:10.803 Cannot find device "nvmf_tgt_br2" 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:10.803 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.062 
20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:11.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:08:11.062 00:08:11.062 --- 10.0.0.2 ping statistics --- 00:08:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.062 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:11.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.026 ms 00:08:11.062 00:08:11.062 --- 10.0.0.3 ping statistics --- 00:08:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.062 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:11.062 00:08:11.062 --- 10.0.0.1 ping statistics --- 00:08:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.062 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66657 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66657 00:08:11.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
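The veth bring-up and ping checks above mirror the nvmf_bdev_io_wait prologue; the notable difference is the core mask: nvmfappstart -m 0x2 gives this target a single reactor, which is why only "Reactor started on core 1" appears below, whereas -m 0xF earlier produced reactors on cores 0-3 and the bdevperf masks 0x10/0x20/0x40/0x80 landed on cores 4-7. A small helper, purely illustrative and not part of the test scripts, that expands such a mask:

mask_to_cores() {            # mask_to_cores 0x2 -> "1"; mask_to_cores 0xF -> "0 1 2 3"
  local mask=$1 i cores=()
  for ((i = 0; i < 32; i++)); do
    (( (mask >> i) & 1 )) && cores+=("$i")
  done
  echo "${cores[*]}"
}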
00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66657 ']' 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.062 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.063 20:44:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:11.063 [2024-07-15 20:44:32.909430] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:11.063 [2024-07-15 20:44:32.909890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.322 [2024-07-15 20:44:33.052246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.322 [2024-07-15 20:44:33.122681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.322 [2024-07-15 20:44:33.123074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.322 [2024-07-15 20:44:33.123360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.322 [2024-07-15 20:44:33.123524] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.322 [2024-07-15 20:44:33.123744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
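Before issuing any RPCs, the test installs the same cleanup trap used by the previous test (nvmf/common.sh@484, visible again just below): if anything aborts the run, the shared-memory state is dumped and the target plus its veth topology are torn down; on the success path the trap is cleared and nvmftestfini is called explicitly, which is the teardown seen at the end of each test above. A compact sketch of that contract, with hypothetical stand-ins for the SPDK helpers so the pattern is self-contained:

# Stand-ins only: the real process_shm/nvmftestfini live in the SPDK test suite.
process_shm()  { :; }
nvmftestfini() { echo "kill nvmf_tgt, remove nvme-tcp modules, flush nvmf_init_if"; }

trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT
# ... create transport/subsystem, run the workload ...
trap - SIGINT SIGTERM EXIT   # success path: drop the trap
nvmftestfini                 # and tear everything down explicitly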
00:08:11.322 [2024-07-15 20:44:33.123872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.322 [2024-07-15 20:44:33.164575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.888 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.888 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:11.888 20:44:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.888 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.888 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 [2024-07-15 20:44:33.809708] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 Malloc0 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 [2024-07-15 20:44:33.884558] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66691 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66691 /var/tmp/bdevperf.sock 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66691 ']' 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.146 20:44:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 [2024-07-15 20:44:33.939882] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:12.146 [2024-07-15 20:44:33.940120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66691 ] 00:08:12.404 [2024-07-15 20:44:34.080860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.404 [2024-07-15 20:44:34.162551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.404 [2024-07-15 20:44:34.204239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:12.971 NVMe0n1 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.971 20:44:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.229 Running I/O for 10 seconds... 
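Unlike bdev_io_wait, the queue-depth test runs a single bdevperf in RPC-server mode: -z keeps it idle, -r exposes /var/tmp/bdevperf.sock, the script waits for that socket, attaches the NVMe-oF controller through it, and bdevperf.py perform_tests then launches the 10-second verify run at queue depth 1024 whose results follow. A condensed sketch of the sequence traced above (rpc_cmd is assumed here to resolve to SPDK's scripts/rpc.py client):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$bdevperf -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!                      # 66691 in this run; waitforlisten polls $sock
$rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests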
00:08:23.206 00:08:23.206 Latency(us) 00:08:23.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.206 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:23.206 Verification LBA range: start 0x0 length 0x4000 00:08:23.206 NVMe0n1 : 10.08 11179.09 43.67 0.00 0.00 91262.07 18844.89 69483.95 00:08:23.206 =================================================================================================================== 00:08:23.206 Total : 11179.09 43.67 0.00 0.00 91262.07 18844.89 69483.95 00:08:23.206 0 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66691 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66691 ']' 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66691 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66691 00:08:23.206 killing process with pid 66691 00:08:23.206 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.206 00:08:23.206 Latency(us) 00:08:23.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.206 =================================================================================================================== 00:08:23.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66691' 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66691 00:08:23.206 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66691 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.465 rmmod nvme_tcp 00:08:23.465 rmmod nvme_fabrics 00:08:23.465 rmmod nvme_keyring 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66657 ']' 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66657 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66657 ']' 00:08:23.465 
20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66657 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.465 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66657 00:08:23.724 killing process with pid 66657 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66657' 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66657 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66657 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.724 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.983 20:44:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:23.983 00:08:23.983 real 0m13.396s 00:08:23.983 user 0m22.658s 00:08:23.983 sys 0m2.465s 00:08:23.983 ************************************ 00:08:23.983 END TEST nvmf_queue_depth 00:08:23.983 ************************************ 00:08:23.983 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.983 20:44:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:23.983 20:44:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:23.983 20:44:45 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:23.983 20:44:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.983 20:44:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.983 20:44:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.983 ************************************ 00:08:23.983 START TEST nvmf_target_multipath 00:08:23.983 ************************************ 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:23.983 * Looking for test storage... 
00:08:23.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.983 20:44:45 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.983 20:44:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:24.241 Cannot find device "nvmf_tgt_br" 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:24.241 Cannot find device "nvmf_tgt_br2" 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:24.241 Cannot find device "nvmf_tgt_br" 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:24.241 
20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:24.241 Cannot find device "nvmf_tgt_br2" 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:24.241 20:44:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:24.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:24.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:24.241 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:24.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:24.499 00:08:24.499 --- 10.0.0.2 ping statistics --- 00:08:24.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.499 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:24.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:24.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:08:24.499 00:08:24.499 --- 10.0.0.3 ping statistics --- 00:08:24.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.499 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:24.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:24.499 00:08:24.499 --- 10.0.0.1 ping statistics --- 00:08:24.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.499 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67011 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67011 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # 
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67011 ']' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.499 20:44:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:24.499 [2024-07-15 20:44:46.377496] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:24.499 [2024-07-15 20:44:46.377560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.756 [2024-07-15 20:44:46.524082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.756 [2024-07-15 20:44:46.606751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.756 [2024-07-15 20:44:46.606969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.756 [2024-07-15 20:44:46.607069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.756 [2024-07-15 20:44:46.607115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.756 [2024-07-15 20:44:46.607123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
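Since NET_TYPE=virt, no physical NIC is involved here: nvmf_veth_init builds the fabric from veth pairs plus a Linux bridge, and the nvmf_tgt instance started above runs inside the nvmf_tgt_ns_spdk namespace. A condensed, hedged sketch of the topology those commands create (cleanup and error handling omitted, lines regrouped for readability):

# Both target ports live in a network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target path 1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # target path 2
# Bring the links up and join the *_br peers with a bridge.
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic and confirm both target addresses answer.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
# Finally the target is launched inside the namespace (as seen above):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF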
00:08:24.756 [2024-07-15 20:44:46.607329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.756 [2024-07-15 20:44:46.607390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.756 [2024-07-15 20:44:46.607762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.756 [2024-07-15 20:44:46.608297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.756 [2024-07-15 20:44:46.649194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:25.321 20:44:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.321 20:44:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:25.321 20:44:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.321 20:44:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.321 20:44:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:25.580 20:44:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.580 20:44:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:25.838 [2024-07-15 20:44:47.508334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.838 20:44:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:25.838 Malloc0 00:08:25.838 20:44:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:26.096 20:44:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.354 20:44:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.611 [2024-07-15 20:44:48.268458] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.611 20:44:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:26.611 [2024-07-15 20:44:48.452316] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:26.611 20:44:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:26.869 20:44:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:26.869 20:44:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.869 20:44:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:08:26.869 20:44:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:26.869 20:44:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:26.869 20:44:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:29.413 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67095 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:29.414 20:44:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:29.414 [global] 00:08:29.414 thread=1 00:08:29.414 invalidate=1 00:08:29.414 rw=randrw 00:08:29.414 time_based=1 00:08:29.414 runtime=6 00:08:29.414 ioengine=libaio 00:08:29.414 direct=1 00:08:29.414 bs=4096 00:08:29.414 iodepth=128 00:08:29.414 norandommap=0 00:08:29.414 numjobs=1 00:08:29.414 00:08:29.414 verify_dump=1 00:08:29.414 verify_backlog=512 00:08:29.414 verify_state_save=0 00:08:29.414 do_verify=1 00:08:29.414 verify=crc32c-intel 00:08:29.414 [job0] 00:08:29.414 filename=/dev/nvme0n1 00:08:29.414 Could not set queue depth (nvme0n1) 00:08:29.414 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:29.414 fio-3.35 00:08:29.414 Starting 1 thread 00:08:29.978 20:44:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:30.236 20:44:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:30.493 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:30.751 20:44:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67095 00:08:36.024 00:08:36.024 job0: (groupid=0, jobs=1): err= 0: pid=67127: Mon Jul 15 20:44:57 2024 00:08:36.024 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(321MiB/6000msec) 00:08:36.024 slat (usec): min=3, max=5763, avg=39.91, stdev=146.40 00:08:36.024 clat (usec): min=1286, max=13899, avg=6409.37, stdev=1148.57 00:08:36.024 lat (usec): min=1306, max=13920, avg=6449.28, stdev=1154.13 00:08:36.024 clat percentiles (usec): 00:08:36.024 | 1.00th=[ 3785], 5.00th=[ 4555], 10.00th=[ 5276], 20.00th=[ 5866], 00:08:36.024 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:08:36.024 | 70.00th=[ 6587], 80.00th=[ 6849], 90.00th=[ 7308], 95.00th=[ 9241], 00:08:36.024 | 99.00th=[10028], 99.50th=[10290], 99.90th=[11600], 99.95th=[11863], 00:08:36.024 | 99.99th=[13304] 00:08:36.024 bw ( KiB/s): min=15072, max=34328, per=50.58%, avg=27717.09, stdev=7593.94, samples=11 00:08:36.024 iops : min= 3768, max= 8582, avg=6929.27, stdev=1898.49, samples=11 00:08:36.024 write: IOPS=8104, BW=31.7MiB/s (33.2MB/s)(165MiB/5218msec); 0 zone resets 00:08:36.024 slat (usec): min=4, max=4963, avg=52.27, stdev=94.28 00:08:36.024 clat (usec): min=658, max=12030, avg=5514.56, stdev=969.07 00:08:36.024 lat (usec): min=715, max=12061, avg=5566.84, stdev=971.03 00:08:36.024 clat percentiles (usec): 00:08:36.024 | 1.00th=[ 3294], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4817], 00:08:36.024 | 30.00th=[ 5276], 40.00th=[ 5473], 50.00th=[ 5604], 60.00th=[ 5800], 00:08:36.024 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6652], 00:08:36.024 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[10159], 99.95th=[10552], 00:08:36.024 | 99.99th=[11076] 00:08:36.024 bw ( KiB/s): min=15952, max=33720, per=85.46%, avg=27703.27, stdev=7075.66, samples=11 00:08:36.024 iops : min= 3988, max= 8430, avg=6926.00, stdev=1768.62, samples=11 00:08:36.024 lat (usec) : 750=0.01% 00:08:36.024 lat (msec) : 2=0.14%, 4=3.33%, 10=95.74%, 20=0.78% 00:08:36.024 cpu : usr=7.40%, sys=32.56%, ctx=7548, majf=0, minf=108 00:08:36.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:36.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.024 issued rwts: total=82201,42289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.024 00:08:36.024 Run status group 0 (all jobs): 00:08:36.024 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=321MiB (337MB), run=6000-6000msec 00:08:36.024 WRITE: bw=31.7MiB/s (33.2MB/s), 31.7MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=165MiB (173MB), run=5218-5218msec 00:08:36.024 00:08:36.024 Disk stats (read/write): 00:08:36.024 nvme0n1: ios=81158/41447, merge=0/0, ticks=473657/197816, in_queue=671473, util=98.56% 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67204 00:08:36.024 20:44:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:36.024 [global] 00:08:36.024 thread=1 00:08:36.024 invalidate=1 00:08:36.024 rw=randrw 00:08:36.024 time_based=1 00:08:36.024 runtime=6 00:08:36.024 ioengine=libaio 00:08:36.024 direct=1 00:08:36.024 bs=4096 00:08:36.024 iodepth=128 00:08:36.024 norandommap=0 00:08:36.024 numjobs=1 00:08:36.024 00:08:36.024 verify_dump=1 00:08:36.024 verify_backlog=512 00:08:36.024 verify_state_save=0 00:08:36.024 do_verify=1 00:08:36.024 verify=crc32c-intel 00:08:36.024 [job0] 00:08:36.024 filename=/dev/nvme0n1 00:08:36.024 Could not set queue depth (nvme0n1) 00:08:36.024 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:36.024 fio-3.35 00:08:36.024 Starting 1 thread 00:08:36.957 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:36.957 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:37.215 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:37.215 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:37.215 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
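This round-robin phase interleaves ANA state changes with the fio workload: each listener's state is flipped over RPC, and check_ana_state then polls the matching /sys/block/nvme0cXn1/ana_state file (with a ~20-step timeout) until the host reflects the change. A hedged approximation of that pattern, reconstructed from the trace rather than copied from multipath.sh; the wait_ana helper name is illustrative, not the script's:

# Flip path 10.0.0.2 to inaccessible and path 10.0.0.3 to non_optimized over RPC.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
# Poll sysfs until the initiator sees the new state (roughly what check_ana_state does).
wait_ana() {    # usage: wait_ana nvme0c0n1 inaccessible
    local f=/sys/block/$1/ana_state timeout=20
    while [[ ! -e $f || $(cat "$f") != "$2" ]]; do
        (( timeout-- > 0 )) || return 1
        sleep 1
    done
}
wait_ana nvme0c0n1 inaccessible
wait_ana nvme0c1n1 non-optimized    # sysfs spells the state with a hyphen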
00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:37.216 20:44:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:37.216 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:37.473 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:37.473 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:37.473 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.473 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.473 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:37.474 20:44:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67204 00:08:42.744 00:08:42.744 job0: (groupid=0, jobs=1): err= 0: pid=67225: Mon Jul 15 20:45:03 2024 00:08:42.744 read: IOPS=14.6k, BW=57.0MiB/s (59.8MB/s)(342MiB/6001msec) 00:08:42.744 slat (usec): min=3, max=5784, avg=33.32, stdev=126.67 00:08:42.744 clat (usec): min=252, max=16668, avg=6031.82, stdev=1346.89 00:08:42.744 lat (usec): min=267, max=16703, avg=6065.14, stdev=1355.43 00:08:42.744 clat percentiles (usec): 00:08:42.744 | 1.00th=[ 2409], 5.00th=[ 3687], 10.00th=[ 4228], 20.00th=[ 5014], 00:08:42.744 | 30.00th=[ 5735], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6390], 00:08:42.744 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7177], 95.00th=[ 8455], 00:08:42.744 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[10552], 99.95th=[10945], 00:08:42.744 | 99.99th=[12780] 00:08:42.744 bw ( KiB/s): min=18336, max=42096, per=51.23%, avg=29906.00, stdev=8360.32, samples=11 00:08:42.744 iops : min= 4584, max=10524, avg=7476.55, stdev=2090.02, samples=11 00:08:42.744 write: IOPS=8826, BW=34.5MiB/s (36.2MB/s)(177MiB/5139msec); 0 zone resets 00:08:42.744 slat (usec): min=5, max=4230, avg=46.64, stdev=81.75 00:08:42.744 clat (usec): min=314, max=12669, avg=5115.82, stdev=1241.14 00:08:42.744 lat (usec): min=369, max=12698, avg=5162.46, stdev=1248.89 00:08:42.744 clat percentiles (usec): 00:08:42.744 | 1.00th=[ 2311], 5.00th=[ 2966], 10.00th=[ 3392], 20.00th=[ 3982], 00:08:42.744 | 30.00th=[ 4490], 40.00th=[ 5014], 50.00th=[ 5407], 60.00th=[ 5604], 00:08:42.744 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6259], 95.00th=[ 6652], 00:08:42.744 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[10290], 99.95th=[11338], 00:08:42.744 | 99.99th=[12649] 00:08:42.744 bw ( KiB/s): min=19264, max=41568, per=84.83%, avg=29949.82, stdev=8024.72, samples=11 00:08:42.744 iops : min= 4816, max=10392, avg=7487.45, stdev=2006.18, samples=11 00:08:42.744 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:08:42.744 lat (msec) : 2=0.47%, 4=11.36%, 10=87.62%, 20=0.51% 00:08:42.744 cpu : usr=7.94%, sys=33.36%, ctx=8459, majf=0, minf=151 00:08:42.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:42.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.744 issued rwts: total=87577,45360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.744 00:08:42.744 Run status group 0 (all jobs): 00:08:42.744 READ: bw=57.0MiB/s (59.8MB/s), 57.0MiB/s-57.0MiB/s (59.8MB/s-59.8MB/s), io=342MiB (359MB), run=6001-6001msec 00:08:42.744 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=177MiB (186MB), run=5139-5139msec 00:08:42.744 00:08:42.744 Disk stats (read/write): 00:08:42.744 nvme0n1: ios=86528/44491, merge=0/0, ticks=472779/193609, in_queue=666388, util=98.68% 00:08:42.744 20:45:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.744 rmmod nvme_tcp 00:08:42.744 rmmod nvme_fabrics 00:08:42.744 rmmod nvme_keyring 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67011 ']' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67011 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67011 ']' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67011 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67011 00:08:42.744 killing process with pid 67011 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67011' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67011 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67011 00:08:42.744 
20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.744 20:45:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:43.003 00:08:43.003 real 0m18.942s 00:08:43.003 user 1m9.055s 00:08:43.003 sys 0m11.602s 00:08:43.003 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.003 20:45:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.003 ************************************ 00:08:43.003 END TEST nvmf_target_multipath 00:08:43.003 ************************************ 00:08:43.003 20:45:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.003 20:45:04 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.003 20:45:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.003 20:45:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.003 20:45:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.003 ************************************ 00:08:43.003 START TEST nvmf_zcopy 00:08:43.003 ************************************ 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.003 * Looking for test storage... 
00:08:43.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.003 20:45:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.004 20:45:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:43.262 Cannot find device "nvmf_tgt_br" 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.262 Cannot find device "nvmf_tgt_br2" 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:43.262 Cannot find device "nvmf_tgt_br" 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:08:43.262 20:45:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:43.262 Cannot find device "nvmf_tgt_br2" 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:08:43.262 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:43.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:08:43.519 00:08:43.519 --- 10.0.0.2 ping statistics --- 00:08:43.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.519 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:43.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:43.519 00:08:43.519 --- 10.0.0.3 ping statistics --- 00:08:43.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.519 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:08:43.519 00:08:43.519 --- 10.0.0.1 ping statistics --- 00:08:43.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.519 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.519 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67479 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67479 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67479 ']' 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.520 20:45:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.520 [2024-07-15 20:45:05.405133] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:43.520 [2024-07-15 20:45:05.405205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.776 [2024-07-15 20:45:05.547559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.776 [2024-07-15 20:45:05.626411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.776 [2024-07-15 20:45:05.626463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:43.776 [2024-07-15 20:45:05.626472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.776 [2024-07-15 20:45:05.626481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.776 [2024-07-15 20:45:05.626488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.776 [2024-07-15 20:45:05.626512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.776 [2024-07-15 20:45:05.667097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.369 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.369 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:08:44.369 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.369 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.369 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 [2024-07-15 20:45:06.299785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 [2024-07-15 20:45:06.323833] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:08:44.627 malloc0 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:44.627 { 00:08:44.627 "params": { 00:08:44.627 "name": "Nvme$subsystem", 00:08:44.627 "trtype": "$TEST_TRANSPORT", 00:08:44.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:44.627 "adrfam": "ipv4", 00:08:44.627 "trsvcid": "$NVMF_PORT", 00:08:44.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:44.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:44.627 "hdgst": ${hdgst:-false}, 00:08:44.627 "ddgst": ${ddgst:-false} 00:08:44.627 }, 00:08:44.627 "method": "bdev_nvme_attach_controller" 00:08:44.627 } 00:08:44.627 EOF 00:08:44.627 )") 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:44.627 20:45:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:44.627 "params": { 00:08:44.627 "name": "Nvme1", 00:08:44.627 "trtype": "tcp", 00:08:44.627 "traddr": "10.0.0.2", 00:08:44.627 "adrfam": "ipv4", 00:08:44.627 "trsvcid": "4420", 00:08:44.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:44.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:44.627 "hdgst": false, 00:08:44.627 "ddgst": false 00:08:44.627 }, 00:08:44.627 "method": "bdev_nvme_attach_controller" 00:08:44.627 }' 00:08:44.627 [2024-07-15 20:45:06.405923] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:44.627 [2024-07-15 20:45:06.405985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67512 ] 00:08:44.884 [2024-07-15 20:45:06.545554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.884 [2024-07-15 20:45:06.633375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.884 [2024-07-15 20:45:06.682997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.884 Running I/O for 10 seconds... 
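For anyone replaying this trace by hand: the xtrace above boils down to a short RPC sequence followed by a single bdevperf invocation. Below is a minimal sketch of the equivalent standalone commands, under these assumptions: the nvmf_tgt started earlier is reachable on the default /var/tmp/spdk.sock, scripts/rpc.py and build/examples/bdevperf are run from the SPDK repo root, and gen_nvmf_target_json is the nvmf/common.sh helper whose printf output is visible in the trace (rpc_cmd in the log is the test framework's wrapper around scripts/rpc.py). The RPC and bdevperf arguments are copied from the trace; the surrounding shell is a reading aid, not the verbatim zcopy.sh.

  #!/usr/bin/env bash
  # Sketch: stand up the zero-copy TCP target the way the trace above does.
  # Assumes nvmf_tgt is already running inside nvmf_tgt_ns_spdk and listening on
  # the default RPC socket. Network layout (from the veth setup traced earlier):
  # nvmf_init_if (10.0.0.1, host namespace) and nvmf_tgt_if/nvmf_tgt_if2
  # (10.0.0.2/10.0.0.3, inside nvmf_tgt_ns_spdk) joined by the nvmf_br bridge,
  # with TCP port 4420 allowed through iptables.
  set -e
  rpc=scripts/rpc.py

  # TCP transport; flags copied from the rpc_cmd call at target/zcopy.sh@22
  # (-c 0 sets the in-capsule data size to 0, --zcopy enables zero copy)
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

  # Subsystem cnode1: allow any host (-a), fixed serial number, up to 10 namespaces
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # Data and discovery listeners on the target-side veth address
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # 32 MiB malloc bdev with a 4 KiB block size, exposed as namespace 1
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # First pass: 10 s verify workload, queue depth 128, 8 KiB I/O, against the
  # controller described by the generated JSON config (fed in via process
  # substitution, which is why the trace shows --json /dev/fd/62)
  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192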
00:08:54.915 00:08:54.915 Latency(us) 00:08:54.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.915 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:54.915 Verification LBA range: start 0x0 length 0x1000 00:08:54.915 Nvme1n1 : 10.01 8248.14 64.44 0.00 0.00 15475.78 2158.21 25898.56 00:08:54.915 =================================================================================================================== 00:08:54.915 Total : 8248.14 64.44 0.00 0.00 15475.78 2158.21 25898.56 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67623 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.173 { 00:08:55.173 "params": { 00:08:55.173 "name": "Nvme$subsystem", 00:08:55.173 "trtype": "$TEST_TRANSPORT", 00:08:55.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.173 "adrfam": "ipv4", 00:08:55.173 "trsvcid": "$NVMF_PORT", 00:08:55.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.173 "hdgst": ${hdgst:-false}, 00:08:55.173 "ddgst": ${ddgst:-false} 00:08:55.173 }, 00:08:55.173 "method": "bdev_nvme_attach_controller" 00:08:55.173 } 00:08:55.173 EOF 00:08:55.173 )") 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:55.173 [2024-07-15 20:45:16.981272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:16.981307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:55.173 20:45:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.173 "params": { 00:08:55.173 "name": "Nvme1", 00:08:55.173 "trtype": "tcp", 00:08:55.173 "traddr": "10.0.0.2", 00:08:55.173 "adrfam": "ipv4", 00:08:55.173 "trsvcid": "4420", 00:08:55.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.173 "hdgst": false, 00:08:55.173 "ddgst": false 00:08:55.173 }, 00:08:55.173 "method": "bdev_nvme_attach_controller" 00:08:55.173 }' 00:08:55.173 [2024-07-15 20:45:16.997236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:16.997255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.009226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.009244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.021227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.021246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.026374] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:08:55.173 [2024-07-15 20:45:17.026429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67623 ] 00:08:55.173 [2024-07-15 20:45:17.033228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.033246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.045227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.045243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.057227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.057244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.069208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.069226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.173 [2024-07-15 20:45:17.081201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.173 [2024-07-15 20:45:17.081221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.093169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.093195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.105153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.105179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.121133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.121151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.133114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.133134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.145098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.145117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.157081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.157099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.167477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.432 [2024-07-15 20:45:17.169065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.169085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.185047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.185075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.197026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.197045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.209009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.209031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.220992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.221012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.232974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.232993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.248950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.248971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.253642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.432 [2024-07-15 20:45:17.260935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.260954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.272923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.272946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.284906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.284929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.296888] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.296910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.303028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:55.432 [2024-07-15 20:45:17.312866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.312894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.324847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.324866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.432 [2024-07-15 20:45:17.336829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.432 [2024-07-15 20:45:17.336848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.348838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.348868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.360821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.360846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.376814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.376843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.388795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.388820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.400788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.400817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 Running I/O for 5 seconds... 
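The second bdevperf pass and the long run of repeated ERROR pairs around it belong together: while the 5-second randrw job (perfpid=67623 in the trace) is starting up and running, the test keeps re-issuing nvmf_subsystem_add_ns for the namespace that already exists. Each attempt pauses the subsystem for the namespace change, gets rejected ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext, then "Unable to add namespace" from the paused-state callback nvmf_rpc_ns_paused), and the subsystem resumes, so the rejections are expected and simply show zero-copy I/O surviving repeated pause/resume cycles. A hedged reconstruction of the driving loop follows; only the bdevperf and RPC arguments and the perfpid name are taken from the trace, the control flow is an assumption.

  # Sketch: second phase of the test. Run random I/O in the background and,
  # for as long as it lives, keep trying to add the same namespace again.
  build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!

  while kill -0 "$perfpid" 2> /dev/null; do
      # Expected to fail on every iteration; an unexpected success would mean
      # the target accepted a duplicate NSID and the test should abort.
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
          && { echo "duplicate NSID was accepted unexpectedly" >&2; exit 1; }
  done
  wait "$perfpid"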
00:08:55.690 [2024-07-15 20:45:17.412771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.412794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.428658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.428692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.443572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.443603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.459101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.459133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.473911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.473942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.489573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.489605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.504634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.504664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.523600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.523633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.538380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.538411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.549002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.549033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.563770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.563801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.579694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.579726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.690 [2024-07-15 20:45:17.594190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.690 [2024-07-15 20:45:17.594220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.610258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.610289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.621001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 
[2024-07-15 20:45:17.621033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.636305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.636336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.651522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.651550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.666480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.666512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.681902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.681936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.696348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.696379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.710846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.710879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.721592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.721622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.736204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.736236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.751648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.751679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.766311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.766342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.781858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.781891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.795728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.795760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.810455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.810487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.825848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.825880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.840073] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.840108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.947 [2024-07-15 20:45:17.854960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.947 [2024-07-15 20:45:17.854994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.870242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.870280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.887736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.887768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.902539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.902573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.917721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.917753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.932517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.932548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.948216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.948248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.962907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.962940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.978672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.978705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:17.992936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:17.992968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.003644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.003673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.018428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.018460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.033946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.033980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.047934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.047965] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.062425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.062459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.072926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.072953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.087758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.087788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.205 [2024-07-15 20:45:18.103479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.205 [2024-07-15 20:45:18.103508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.118056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.118084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.129316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.129343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.144206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.144233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.159875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.159908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.174646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.174677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.189783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.189814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.204665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.204695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.220144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.220184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.234859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.234891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.250459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.250491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.264873] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.264904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.279101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.279135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.293339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.293369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.307503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.307535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.318062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.318090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.332962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.332995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.348511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.348540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.463 [2024-07-15 20:45:18.362459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.463 [2024-07-15 20:45:18.362493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.377387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.377417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.393254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.393284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.408201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.408231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.423703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.423734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.437958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.437989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.452315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.452349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.463539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.463573] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.478411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.478443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.489180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.489213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.503670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.503703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.514374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.514407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.529409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.529439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.544818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.544848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.559644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.559678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.575417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.575449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.589357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.589388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.603853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.603886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.618299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.618331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.722 [2024-07-15 20:45:18.629144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.722 [2024-07-15 20:45:18.629185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.980 [2024-07-15 20:45:18.643793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.980 [2024-07-15 20:45:18.643826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.980 [2024-07-15 20:45:18.655069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.980 [2024-07-15 20:45:18.655102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.980 [2024-07-15 20:45:18.669550] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:56.980 [2024-07-15 20:45:18.669580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors ("Requested NSID 1 already in use" from subsystem.c:2058 followed by "Unable to add namespace" from nvmf_rpc.c:1553) repeats with successive timestamps from 20:45:18.683 through 20:45:22.403 ...]
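The two messages in this block are the target's standard response when an nvmf_subsystem_add_ns RPC asks for a namespace ID that is already attached: subsystem.c rejects the duplicate NSID and nvmf_rpc.c then reports that the namespace could not be added. The test issues these requests continuously and the run still completes successfully further down, so the repetition is expected noise rather than a failure. A single occurrence can be provoked by hand with something like the sketch below (the rpc.py path and the malloc0 bdev name are assumptions drawn from commands that appear later in this log; rpc_cmd in the test scripts is assumed to forward to scripts/rpc.py):

  # Sketch only: trigger one "Requested NSID 1 already in use" error by hand.
  # Assumes a running SPDK nvmf target that already serves NSID 1 on
  # nqn.2016-06.io.spdk:cnode1, and that rpc.py talks to its default RPC socket.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Adding the same NSID a second time fails; the target logs the pair above.
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1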
00:09:00.604 Latency(us)
00:09:00.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:00.604 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:00.604 Nvme1n1 : 5.01 16314.06 127.45 0.00 0.00 7838.67 3329.44 15897.09
00:09:00.604 ===================================================================================================================
00:09:00.604 Total : 16314.06 127.45 0.00 0.00 7838.67 3329.44 15897.09
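As a quick sanity check on the table above: with the 8192-byte IO size reported on the Job line, the IOPS and MiB/s columns are consistent with each other.

  # 16314.06 IOPS * 8192 bytes per IO = 133,644,780 bytes/s
  # 133,644,780 / 1,048,576 bytes per MiB ~= 127.45 MiB/s (matches the MiB/s column)
  echo '16314.06 * 8192 / 1048576' | bc -l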
00:09:00.604 [2024-07-15 20:45:22.415199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.604 [2024-07-15 20:45:22.415230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues with successive timestamps through 20:45:22.598 ...]
00:09:00.864 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67623) - No such process
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67623
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.864 delay0
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
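For reference, the three rpc_cmd calls above (remove the namespace, wrap malloc0 in a delay bdev, re-attach it as NSID 1) map onto SPDK's JSON-RPC client. Run by hand against the same target they would look roughly like this sketch (the rpc.py path is an assumption, rpc_cmd may add socket arguments that are omitted here, and the four latency values are taken verbatim from the log):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 1. Detach namespace 1 from the subsystem.
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # 2. Create delay0 on top of malloc0 with all four latency parameters set to 1,000,000 us.
  $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # 3. Re-attach the delayed bdev as namespace 1 (the abort run below exercises this slow namespace).
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1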
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.864 20:45:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:01.123 [2024-07-15 20:45:22.822308] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:07.704 Initializing NVMe Controllers 00:09:07.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:07.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:07.704 Initialization complete. Launching workers. 00:09:07.704 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 99 00:09:07.704 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 386, failed to submit 33 00:09:07.704 success 253, unsuccess 133, failed 0 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.704 rmmod nvme_tcp 00:09:07.704 rmmod nvme_fabrics 00:09:07.704 rmmod nvme_keyring 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67479 ']' 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67479 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67479 ']' 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67479 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.704 20:45:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67479 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:07.704 killing process with pid 67479 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67479' 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67479 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67479 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.704 20:45:29 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:07.704 00:09:07.704 real 0m24.567s 00:09:07.704 user 0m39.930s 00:09:07.704 sys 0m7.976s 00:09:07.704 ************************************ 00:09:07.704 END TEST nvmf_zcopy 00:09:07.704 ************************************ 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.704 20:45:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.704 20:45:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:07.704 20:45:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.704 20:45:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:07.704 20:45:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.704 20:45:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:07.704 ************************************ 00:09:07.704 START TEST nvmf_nmic 00:09:07.704 ************************************ 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:07.704 * Looking for test storage... 
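The zcopy run that just finished boils down to three moves: free NSID 1, stack a delay bdev with roughly one second of injected latency on top of malloc0 and re-export it as namespace 1, then drive queued random I/O at it with the abort example so that slow in-flight commands get aborted. A by-hand sketch of the same step follows; the flags and names are copied from the trace above, while calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper is an assumption.

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # free NSID 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                           # latencies in usec (~1 s)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'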
00:09:07.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.704 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
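Before the target comes up, nvmf_veth_init (the block that follows) builds the virtual test network: one veth pair for the initiator side and two for the target side (only one shown below), with the target ends moved into the nvmf_tgt_ns_spdk namespace and the host-side ends enslaved to a single bridge. A trimmed sketch of the same topology, using the interface names and addresses from the log; the second target interface and the iptables ACCEPT rules are left out.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                                        # host -> namespaced target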
00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:07.705 Cannot find device "nvmf_tgt_br" 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:07.705 Cannot find device "nvmf_tgt_br2" 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:07.705 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:07.964 Cannot find device "nvmf_tgt_br" 00:09:07.964 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:07.965 Cannot find device "nvmf_tgt_br2" 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:07.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:07.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:07.965 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:08.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:09:08.223 00:09:08.223 --- 10.0.0.2 ping statistics --- 00:09:08.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.223 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:08.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:08.223 00:09:08.223 --- 10.0.0.3 ping statistics --- 00:09:08.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.223 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:08.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:08.223 00:09:08.223 --- 10.0.0.1 ping statistics --- 00:09:08.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.223 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67942 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67942 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 67942 ']' 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.223 20:45:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.223 [2024-07-15 20:45:29.991256] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:09:08.223 [2024-07-15 20:45:29.991312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.506 [2024-07-15 20:45:30.133728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.506 [2024-07-15 20:45:30.209947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.506 [2024-07-15 20:45:30.210027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.506 [2024-07-15 20:45:30.210038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.506 [2024-07-15 20:45:30.210046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.506 [2024-07-15 20:45:30.210052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.506 [2024-07-15 20:45:30.210252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.506 [2024-07-15 20:45:30.210536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.506 [2024-07-15 20:45:30.211263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.506 [2024-07-15 20:45:30.211265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.506 [2024-07-15 20:45:30.253060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 [2024-07-15 20:45:30.871320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 Malloc0 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 [2024-07-15 20:45:30.942748] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 test case1: single bdev can't be used in multiple subsystems 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.074 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.074 [2024-07-15 20:45:30.974583] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:09.074 [2024-07-15 20:45:30.974617] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:09.074 [2024-07-15 20:45:30.974626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.074 request: 00:09:09.074 { 00:09:09.074 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:09.074 "namespace": { 00:09:09.074 "bdev_name": "Malloc0", 00:09:09.074 "no_auto_visible": false 00:09:09.074 }, 00:09:09.074 "method": "nvmf_subsystem_add_ns", 00:09:09.074 "req_id": 1 00:09:09.074 } 00:09:09.074 Got JSON-RPC error response 00:09:09.074 response: 00:09:09.074 { 00:09:09.074 "code": -32602, 00:09:09.074 "message": "Invalid parameters" 00:09:09.074 } 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:09.332 Adding namespace failed - expected result. 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
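Test case 1 passes precisely because adding a namespace claims the bdev with an exclusive-write claim: Malloc0 is already claimed by cnode1, so offering it to cnode2 has to fail, and the harness counts the -32602 response above as the expected result. A stand-alone sketch of the same negative check, assuming the target is already running with cnode1 owning Malloc0 and using scripts/rpc.py in place of the rpc_cmd wrapper:

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: Malloc0 was attached to a second subsystem" >&2
      exit 1
  fi
  echo "Adding namespace failed - expected result."                         # same message the test prints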
00:09:09.332 test case2: host connect to nvmf target in multiple paths 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.332 [2024-07-15 20:45:30.990655] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.332 20:45:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.332 20:45:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:09.591 20:45:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.591 20:45:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.591 20:45:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.591 20:45:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:09.591 20:45:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:11.513 20:45:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:11.513 [global] 00:09:11.513 thread=1 00:09:11.513 invalidate=1 00:09:11.513 rw=write 00:09:11.513 time_based=1 00:09:11.513 runtime=1 00:09:11.513 ioengine=libaio 00:09:11.513 direct=1 00:09:11.513 bs=4096 00:09:11.513 iodepth=1 00:09:11.513 norandommap=0 00:09:11.513 numjobs=1 00:09:11.513 00:09:11.513 verify_dump=1 00:09:11.513 verify_backlog=512 00:09:11.513 verify_state_save=0 00:09:11.513 do_verify=1 00:09:11.513 verify=crc32c-intel 00:09:11.513 [job0] 00:09:11.513 filename=/dev/nvme0n1 00:09:11.513 Could not set queue depth (nvme0n1) 00:09:11.779 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.779 fio-3.35 00:09:11.779 Starting 1 thread 00:09:12.713 00:09:12.713 job0: (groupid=0, jobs=1): err= 0: pid=68039: Mon Jul 15 20:45:34 2024 00:09:12.713 read: IOPS=3952, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1001msec) 00:09:12.713 slat (nsec): min=7228, max=22118, avg=7733.65, stdev=774.47 00:09:12.713 clat (usec): min=106, 
max=193, avg=141.82, stdev=14.22 00:09:12.713 lat (usec): min=114, max=201, avg=149.55, stdev=14.25 00:09:12.713 clat percentiles (usec): 00:09:12.713 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 129], 00:09:12.713 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:09:12.713 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 163], 00:09:12.713 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 186], 00:09:12.713 | 99.99th=[ 194] 00:09:12.713 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:12.713 slat (usec): min=11, max=135, avg=13.15, stdev= 5.95 00:09:12.713 clat (usec): min=62, max=489, avg=84.85, stdev=14.06 00:09:12.713 lat (usec): min=75, max=501, avg=97.99, stdev=16.13 00:09:12.713 clat percentiles (usec): 00:09:12.713 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 75], 00:09:12.713 | 30.00th=[ 79], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 88], 00:09:12.713 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 98], 95.00th=[ 101], 00:09:12.713 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 169], 99.95th=[ 318], 00:09:12.713 | 99.99th=[ 490] 00:09:12.713 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:09:12.713 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:12.713 lat (usec) : 100=47.84%, 250=52.12%, 500=0.04% 00:09:12.713 cpu : usr=1.40%, sys=7.00%, ctx=8055, majf=0, minf=2 00:09:12.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.713 issued rwts: total=3956,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.713 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.713 00:09:12.713 Run status group 0 (all jobs): 00:09:12.713 READ: bw=15.4MiB/s (16.2MB/s), 15.4MiB/s-15.4MiB/s (16.2MB/s-16.2MB/s), io=15.5MiB (16.2MB), run=1001-1001msec 00:09:12.713 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:09:12.713 00:09:12.713 Disk stats (read/write): 00:09:12.713 nvme0n1: ios=3634/3618, merge=0/0, ticks=524/328, in_queue=852, util=91.37% 00:09:12.713 20:45:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:12.972 20:45:34 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.972 rmmod nvme_tcp 00:09:12.972 rmmod nvme_fabrics 00:09:12.972 rmmod nvme_keyring 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67942 ']' 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67942 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 67942 ']' 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 67942 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67942 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.972 killing process with pid 67942 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67942' 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 67942 00:09:12.972 20:45:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 67942 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:13.231 00:09:13.231 real 0m5.723s 00:09:13.231 user 0m17.867s 00:09:13.231 sys 0m2.467s 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.231 20:45:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.231 ************************************ 00:09:13.231 END TEST nvmf_nmic 00:09:13.231 ************************************ 00:09:13.491 20:45:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.491 20:45:35 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:13.491 20:45:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.491 20:45:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.491 
20:45:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.491 ************************************ 00:09:13.491 START TEST nvmf_fio_target 00:09:13.491 ************************************ 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:13.491 * Looking for test storage... 00:09:13.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.491 20:45:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.492 
20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:13.492 Cannot find device "nvmf_tgt_br" 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.492 Cannot find device "nvmf_tgt_br2" 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:13.492 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:13.752 Cannot find device "nvmf_tgt_br" 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:13.752 Cannot find device "nvmf_tgt_br2" 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:13.752 20:45:35 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:13.752 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:14.012 00:09:14.012 --- 10.0.0.2 ping statistics --- 00:09:14.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.012 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:09:14.012 00:09:14.012 --- 10.0.0.3 ping statistics --- 00:09:14.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.012 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:14.012 00:09:14.012 --- 10.0.0.1 ping statistics --- 00:09:14.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.012 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68217 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68217 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68217 ']' 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.012 20:45:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.012 [2024-07-15 20:45:35.881931] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:09:14.012 [2024-07-15 20:45:35.881991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.271 [2024-07-15 20:45:36.011413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.272 [2024-07-15 20:45:36.097230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.272 [2024-07-15 20:45:36.097279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.272 [2024-07-15 20:45:36.097289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.272 [2024-07-15 20:45:36.097297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.272 [2024-07-15 20:45:36.097304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.272 [2024-07-15 20:45:36.097518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.272 [2024-07-15 20:45:36.097708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.272 [2024-07-15 20:45:36.098666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.272 [2024-07-15 20:45:36.098667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.272 [2024-07-15 20:45:36.140158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.839 20:45:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.839 20:45:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:14.839 20:45:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.839 20:45:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.839 20:45:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.098 20:45:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.098 20:45:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.098 [2024-07-15 20:45:36.947307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.098 20:45:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.357 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:15.357 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.615 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:15.615 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.873 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:15.873 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.132 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:16.132 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:16.132 20:45:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.391 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:16.391 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.649 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:16.649 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.908 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:16.908 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:16.908 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.168 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:17.168 20:45:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.428 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:17.428 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.428 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.687 [2024-07-15 20:45:39.455633] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.687 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:17.946 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:17.946 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.205 20:45:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:18.205 20:45:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.205 20:45:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.205 
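Condensed from the rpc.py calls traced above, the target provisioning and host connect come down to the sequence below. This is a hedged sketch, not the test script itself: the rpc.py path, NQNs, serial, and addresses are taken from the log, and $rpc is assumed to reach the nvmf_tgt started earlier over /var/tmp/spdk.sock:

  # Shorthand for the SPDK RPC client used throughout the trace
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

  $rpc nvmf_create_transport -t tcp -o -u 8192

  # Two plain malloc bdevs exported directly as namespaces
  $rpc bdev_malloc_create 64 512        # -> Malloc0 in the trace
  $rpc bdev_malloc_create 64 512        # -> Malloc1

  # raid0 over two more malloc bdevs, concat over another three
  $rpc bdev_malloc_create 64 512        # -> Malloc2
  $rpc bdev_malloc_create 64 512        # -> Malloc3
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512        # -> Malloc4
  $rpc bdev_malloc_create 64 512        # -> Malloc5
  $rpc bdev_malloc_create 64 512        # -> Malloc6
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # One subsystem carrying all four namespaces, listening on the in-namespace address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Host side: connect with nvme-cli, then wait until four namespaces appear
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e \
       --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial expects 4

The result is the /dev/nvme0n1 through /dev/nvme0n4 block devices that the fio job files in the rest of the trace target.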
20:45:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:18.205 20:45:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:18.205 20:45:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:20.123 20:45:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:20.123 [global] 00:09:20.123 thread=1 00:09:20.123 invalidate=1 00:09:20.123 rw=write 00:09:20.123 time_based=1 00:09:20.123 runtime=1 00:09:20.123 ioengine=libaio 00:09:20.123 direct=1 00:09:20.123 bs=4096 00:09:20.123 iodepth=1 00:09:20.123 norandommap=0 00:09:20.123 numjobs=1 00:09:20.123 00:09:20.123 verify_dump=1 00:09:20.123 verify_backlog=512 00:09:20.123 verify_state_save=0 00:09:20.123 do_verify=1 00:09:20.123 verify=crc32c-intel 00:09:20.123 [job0] 00:09:20.123 filename=/dev/nvme0n1 00:09:20.123 [job1] 00:09:20.123 filename=/dev/nvme0n2 00:09:20.123 [job2] 00:09:20.123 filename=/dev/nvme0n3 00:09:20.123 [job3] 00:09:20.123 filename=/dev/nvme0n4 00:09:20.382 Could not set queue depth (nvme0n1) 00:09:20.382 Could not set queue depth (nvme0n2) 00:09:20.382 Could not set queue depth (nvme0n3) 00:09:20.382 Could not set queue depth (nvme0n4) 00:09:20.382 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.382 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.382 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.382 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.382 fio-3.35 00:09:20.382 Starting 4 threads 00:09:21.757 00:09:21.757 job0: (groupid=0, jobs=1): err= 0: pid=68390: Mon Jul 15 20:45:43 2024 00:09:21.757 read: IOPS=2472, BW=9890KiB/s (10.1MB/s)(9900KiB/1001msec) 00:09:21.757 slat (nsec): min=5712, max=19056, avg=7257.98, stdev=954.66 00:09:21.757 clat (usec): min=149, max=1860, avg=214.86, stdev=36.98 00:09:21.757 lat (usec): min=159, max=1868, avg=222.12, stdev=37.03 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:09:21.757 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 217], 00:09:21.757 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 243], 00:09:21.757 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 359], 99.95th=[ 437], 00:09:21.757 | 99.99th=[ 1860] 00:09:21.757 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:21.757 slat (usec): min=7, max=107, avg=11.18, stdev= 4.87 00:09:21.757 clat (usec): min=90, max=313, avg=163.20, stdev=13.96 00:09:21.757 lat (usec): min=124, max=421, avg=174.38, stdev=15.67 00:09:21.757 clat 
percentiles (usec): 00:09:21.757 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:21.757 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:09:21.757 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:09:21.757 | 99.00th=[ 200], 99.50th=[ 212], 99.90th=[ 233], 99.95th=[ 235], 00:09:21.757 | 99.99th=[ 314] 00:09:21.757 bw ( KiB/s): min=11968, max=11968, per=23.90%, avg=11968.00, stdev= 0.00, samples=1 00:09:21.757 iops : min= 2992, max= 2992, avg=2992.00, stdev= 0.00, samples=1 00:09:21.757 lat (usec) : 100=0.08%, 250=98.77%, 500=1.13% 00:09:21.757 lat (msec) : 2=0.02% 00:09:21.757 cpu : usr=0.90%, sys=4.10%, ctx=5036, majf=0, minf=9 00:09:21.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 issued rwts: total=2475,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.757 job1: (groupid=0, jobs=1): err= 0: pid=68391: Mon Jul 15 20:45:43 2024 00:09:21.757 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:21.757 slat (nsec): min=7169, max=25545, avg=7857.23, stdev=1290.82 00:09:21.757 clat (usec): min=115, max=5755, avg=150.07, stdev=179.08 00:09:21.757 lat (usec): min=122, max=5762, avg=157.93, stdev=179.39 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:09:21.757 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:09:21.757 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:09:21.757 | 99.00th=[ 174], 99.50th=[ 237], 99.90th=[ 4080], 99.95th=[ 4146], 00:09:21.757 | 99.99th=[ 5735] 00:09:21.757 write: IOPS=3821, BW=14.9MiB/s (15.7MB/s)(14.9MiB/1001msec); 0 zone resets 00:09:21.757 slat (usec): min=11, max=142, avg=12.86, stdev= 5.41 00:09:21.757 clat (usec): min=73, max=729, avg=98.92, stdev=17.71 00:09:21.757 lat (usec): min=85, max=741, avg=111.78, stdev=19.19 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 91], 00:09:21.757 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 99], 00:09:21.757 | 70.00th=[ 102], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 116], 00:09:21.757 | 99.00th=[ 128], 99.50th=[ 135], 99.90th=[ 330], 99.95th=[ 619], 00:09:21.757 | 99.99th=[ 734] 00:09:21.757 bw ( KiB/s): min=16384, max=16384, per=32.72%, avg=16384.00, stdev= 0.00, samples=1 00:09:21.757 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:21.757 lat (usec) : 100=32.58%, 250=67.12%, 500=0.15%, 750=0.04% 00:09:21.757 lat (msec) : 2=0.01%, 4=0.04%, 10=0.05% 00:09:21.757 cpu : usr=1.90%, sys=5.90%, ctx=7410, majf=0, minf=14 00:09:21.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 issued rwts: total=3584,3825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.757 job2: (groupid=0, jobs=1): err= 0: pid=68392: Mon Jul 15 20:45:43 2024 00:09:21.757 read: IOPS=3533, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec) 00:09:21.757 slat (nsec): min=9057, max=30778, avg=10482.90, stdev=1012.75 00:09:21.757 clat (usec): 
min=118, max=402, avg=146.19, stdev=12.16 00:09:21.757 lat (usec): min=129, max=413, avg=156.67, stdev=12.19 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 137], 00:09:21.757 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:09:21.757 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:09:21.757 | 99.00th=[ 178], 99.50th=[ 180], 99.90th=[ 233], 99.95th=[ 285], 00:09:21.757 | 99.99th=[ 404] 00:09:21.757 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:21.757 slat (usec): min=10, max=158, avg=16.29, stdev= 6.23 00:09:21.757 clat (usec): min=76, max=165, avg=106.05, stdev=12.03 00:09:21.757 lat (usec): min=91, max=308, avg=122.33, stdev=14.59 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 83], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 96], 00:09:21.757 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 109], 00:09:21.757 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 127], 00:09:21.757 | 99.00th=[ 139], 99.50th=[ 141], 99.90th=[ 151], 99.95th=[ 153], 00:09:21.757 | 99.99th=[ 165] 00:09:21.757 bw ( KiB/s): min=16384, max=16384, per=32.72%, avg=16384.00, stdev= 0.00, samples=1 00:09:21.757 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:21.757 lat (usec) : 100=17.92%, 250=82.04%, 500=0.04% 00:09:21.757 cpu : usr=1.80%, sys=7.50%, ctx=7122, majf=0, minf=5 00:09:21.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 issued rwts: total=3537,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.757 job3: (groupid=0, jobs=1): err= 0: pid=68393: Mon Jul 15 20:45:43 2024 00:09:21.757 read: IOPS=2471, BW=9886KiB/s (10.1MB/s)(9896KiB/1001msec) 00:09:21.757 slat (nsec): min=5849, max=39213, avg=7246.18, stdev=1263.57 00:09:21.757 clat (usec): min=133, max=1811, avg=214.97, stdev=36.03 00:09:21.757 lat (usec): min=145, max=1818, avg=222.21, stdev=36.11 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:09:21.757 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 217], 00:09:21.757 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 243], 00:09:21.757 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 318], 99.95th=[ 474], 00:09:21.757 | 99.99th=[ 1811] 00:09:21.757 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:21.757 slat (nsec): min=10921, max=63314, avg=12955.02, stdev=4748.85 00:09:21.757 clat (usec): min=96, max=227, avg=161.26, stdev=13.23 00:09:21.757 lat (usec): min=130, max=270, avg=174.22, stdev=14.73 00:09:21.757 clat percentiles (usec): 00:09:21.757 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:09:21.757 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:09:21.757 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:09:21.757 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 223], 99.95th=[ 227], 00:09:21.757 | 99.99th=[ 227] 00:09:21.757 bw ( KiB/s): min=11960, max=11960, per=23.89%, avg=11960.00, stdev= 0.00, samples=1 00:09:21.757 iops : min= 2990, max= 2990, avg=2990.00, stdev= 0.00, samples=1 00:09:21.757 lat (usec) : 100=0.06%, 250=98.85%, 500=1.07% 00:09:21.757 lat (msec) : 
2=0.02% 00:09:21.757 cpu : usr=1.60%, sys=4.10%, ctx=5036, majf=0, minf=9 00:09:21.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.757 issued rwts: total=2474,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.757 00:09:21.757 Run status group 0 (all jobs): 00:09:21.757 READ: bw=47.1MiB/s (49.4MB/s), 9886KiB/s-14.0MiB/s (10.1MB/s-14.7MB/s), io=47.1MiB (49.4MB), run=1001-1001msec 00:09:21.757 WRITE: bw=48.9MiB/s (51.3MB/s), 9.99MiB/s-14.9MiB/s (10.5MB/s-15.7MB/s), io=48.9MiB (51.3MB), run=1001-1001msec 00:09:21.757 00:09:21.757 Disk stats (read/write): 00:09:21.757 nvme0n1: ios=2097/2245, merge=0/0, ticks=460/336, in_queue=796, util=87.14% 00:09:21.757 nvme0n2: ios=3099/3171, merge=0/0, ticks=475/336, in_queue=811, util=86.21% 00:09:21.757 nvme0n3: ios=2981/3072, merge=0/0, ticks=451/349, in_queue=800, util=89.02% 00:09:21.757 nvme0n4: ios=2048/2242, merge=0/0, ticks=441/366, in_queue=807, util=89.66% 00:09:21.758 20:45:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:21.758 [global] 00:09:21.758 thread=1 00:09:21.758 invalidate=1 00:09:21.758 rw=randwrite 00:09:21.758 time_based=1 00:09:21.758 runtime=1 00:09:21.758 ioengine=libaio 00:09:21.758 direct=1 00:09:21.758 bs=4096 00:09:21.758 iodepth=1 00:09:21.758 norandommap=0 00:09:21.758 numjobs=1 00:09:21.758 00:09:21.758 verify_dump=1 00:09:21.758 verify_backlog=512 00:09:21.758 verify_state_save=0 00:09:21.758 do_verify=1 00:09:21.758 verify=crc32c-intel 00:09:21.758 [job0] 00:09:21.758 filename=/dev/nvme0n1 00:09:21.758 [job1] 00:09:21.758 filename=/dev/nvme0n2 00:09:21.758 [job2] 00:09:21.758 filename=/dev/nvme0n3 00:09:21.758 [job3] 00:09:21.758 filename=/dev/nvme0n4 00:09:21.758 Could not set queue depth (nvme0n1) 00:09:21.758 Could not set queue depth (nvme0n2) 00:09:21.758 Could not set queue depth (nvme0n3) 00:09:21.758 Could not set queue depth (nvme0n4) 00:09:21.758 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.758 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.758 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.758 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.758 fio-3.35 00:09:21.758 Starting 4 threads 00:09:23.136 00:09:23.136 job0: (groupid=0, jobs=1): err= 0: pid=68452: Mon Jul 15 20:45:44 2024 00:09:23.136 read: IOPS=3883, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec) 00:09:23.136 slat (nsec): min=6751, max=24905, avg=7383.91, stdev=856.36 00:09:23.136 clat (usec): min=108, max=307, avg=131.75, stdev=10.59 00:09:23.136 lat (usec): min=115, max=314, avg=139.14, stdev=10.63 00:09:23.136 clat percentiles (usec): 00:09:23.136 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:09:23.136 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:09:23.136 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:09:23.136 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 289], 00:09:23.136 | 99.99th=[ 310] 00:09:23.136 write: IOPS=4091, 
BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:23.136 slat (nsec): min=8400, max=88633, avg=12586.44, stdev=4155.08 00:09:23.136 clat (usec): min=74, max=202, avg=97.83, stdev=10.49 00:09:23.136 lat (usec): min=86, max=237, avg=110.42, stdev=11.90 00:09:23.136 clat percentiles (usec): 00:09:23.136 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:09:23.136 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 99], 00:09:23.136 | 70.00th=[ 102], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 118], 00:09:23.137 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 147], 99.95th=[ 151], 00:09:23.137 | 99.99th=[ 202] 00:09:23.137 bw ( KiB/s): min=16384, max=16384, per=31.44%, avg=16384.00, stdev= 0.00, samples=1 00:09:23.137 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:23.137 lat (usec) : 100=32.77%, 250=67.21%, 500=0.03% 00:09:23.137 cpu : usr=1.90%, sys=6.40%, ctx=7983, majf=0, minf=18 00:09:23.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 issued rwts: total=3887,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.137 job1: (groupid=0, jobs=1): err= 0: pid=68453: Mon Jul 15 20:45:44 2024 00:09:23.137 read: IOPS=3880, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec) 00:09:23.137 slat (nsec): min=7231, max=25518, avg=7670.04, stdev=688.03 00:09:23.137 clat (usec): min=111, max=470, avg=134.57, stdev=11.33 00:09:23.137 lat (usec): min=118, max=478, avg=142.24, stdev=11.35 00:09:23.137 clat percentiles (usec): 00:09:23.137 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:09:23.137 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:09:23.137 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:09:23.137 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 445], 00:09:23.137 | 99.99th=[ 469] 00:09:23.137 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:23.137 slat (nsec): min=11117, max=99349, avg=12872.15, stdev=5406.16 00:09:23.137 clat (usec): min=72, max=149, avg=94.71, stdev= 8.55 00:09:23.137 lat (usec): min=84, max=233, avg=107.59, stdev=11.33 00:09:23.137 clat percentiles (usec): 00:09:23.137 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:09:23.137 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 95], 00:09:23.137 | 70.00th=[ 98], 80.00th=[ 101], 90.00th=[ 106], 95.00th=[ 111], 00:09:23.137 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 139], 99.95th=[ 143], 00:09:23.137 | 99.99th=[ 151] 00:09:23.137 bw ( KiB/s): min=16384, max=16384, per=31.44%, avg=16384.00, stdev= 0.00, samples=1 00:09:23.137 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:23.137 lat (usec) : 100=39.55%, 250=60.43%, 500=0.03% 00:09:23.137 cpu : usr=2.00%, sys=6.40%, ctx=7980, majf=0, minf=5 00:09:23.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 issued rwts: total=3884,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.137 job2: (groupid=0, jobs=1): err= 0: pid=68454: Mon Jul 15 20:45:44 2024 
00:09:23.137 read: IOPS=2052, BW=8212KiB/s (8409kB/s)(8220KiB/1001msec) 00:09:23.137 slat (nsec): min=7373, max=72088, avg=8454.21, stdev=3472.56 00:09:23.137 clat (usec): min=159, max=1796, avg=262.25, stdev=51.06 00:09:23.137 lat (usec): min=178, max=1804, avg=270.70, stdev=50.98 00:09:23.137 clat percentiles (usec): 00:09:23.137 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:09:23.137 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:09:23.137 | 70.00th=[ 269], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 330], 00:09:23.137 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 498], 99.95th=[ 502], 00:09:23.137 | 99.99th=[ 1795] 00:09:23.137 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:23.137 slat (usec): min=11, max=104, avg=13.98, stdev= 6.57 00:09:23.137 clat (usec): min=88, max=242, avg=157.73, stdev=32.32 00:09:23.137 lat (usec): min=101, max=269, avg=171.71, stdev=31.92 00:09:23.137 clat percentiles (usec): 00:09:23.137 | 1.00th=[ 96], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 118], 00:09:23.137 | 30.00th=[ 131], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 176], 00:09:23.137 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:09:23.137 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 231], 99.95th=[ 233], 00:09:23.137 | 99.99th=[ 243] 00:09:23.137 bw ( KiB/s): min=10472, max=10472, per=20.10%, avg=10472.00, stdev= 0.00, samples=1 00:09:23.137 iops : min= 2618, max= 2618, avg=2618.00, stdev= 0.00, samples=1 00:09:23.137 lat (usec) : 100=1.19%, 250=78.22%, 500=20.54%, 750=0.02% 00:09:23.137 lat (msec) : 2=0.02% 00:09:23.137 cpu : usr=0.90%, sys=4.40%, ctx=4615, majf=0, minf=7 00:09:23.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 issued rwts: total=2055,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.137 job3: (groupid=0, jobs=1): err= 0: pid=68455: Mon Jul 15 20:45:44 2024 00:09:23.137 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:23.137 slat (nsec): min=9832, max=28139, avg=10705.10, stdev=1117.76 00:09:23.137 clat (usec): min=149, max=1527, avg=260.23, stdev=47.30 00:09:23.137 lat (usec): min=160, max=1537, avg=270.94, stdev=47.34 00:09:23.137 clat percentiles (usec): 00:09:23.137 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:09:23.137 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 253], 00:09:23.137 | 70.00th=[ 269], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:09:23.137 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 412], 99.95th=[ 416], 00:09:23.137 | 99.99th=[ 1532] 00:09:23.137 write: IOPS=2285, BW=9143KiB/s (9362kB/s)(9152KiB/1001msec); 0 zone resets 00:09:23.137 slat (usec): min=14, max=110, avg=20.19, stdev=12.34 00:09:23.137 clat (usec): min=92, max=328, avg=172.11, stdev=43.78 00:09:23.137 lat (usec): min=107, max=385, avg=192.30, stdev=53.30 00:09:23.137 clat percentiles (usec): 00:09:23.137 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 116], 20.00th=[ 127], 00:09:23.137 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:09:23.137 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 241], 95.00th=[ 269], 00:09:23.137 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 326], 00:09:23.137 | 99.99th=[ 330] 00:09:23.137 bw ( KiB/s): min= 8192, max= 
8192, per=15.72%, avg=8192.00, stdev= 0.00, samples=1 00:09:23.137 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:23.137 lat (usec) : 100=0.35%, 250=74.86%, 500=24.77% 00:09:23.137 lat (msec) : 2=0.02% 00:09:23.137 cpu : usr=1.60%, sys=5.00%, ctx=4336, majf=0, minf=15 00:09:23.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.137 issued rwts: total=2048,2288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.137 00:09:23.137 Run status group 0 (all jobs): 00:09:23.137 READ: bw=46.3MiB/s (48.6MB/s), 8184KiB/s-15.2MiB/s (8380kB/s-15.9MB/s), io=46.4MiB (48.6MB), run=1001-1001msec 00:09:23.137 WRITE: bw=50.9MiB/s (53.4MB/s), 9143KiB/s-16.0MiB/s (9362kB/s-16.8MB/s), io=50.9MiB (53.4MB), run=1001-1001msec 00:09:23.137 00:09:23.137 Disk stats (read/write): 00:09:23.137 nvme0n1: ios=3426/3584, merge=0/0, ticks=469/364, in_queue=833, util=89.08% 00:09:23.137 nvme0n2: ios=3422/3584, merge=0/0, ticks=475/357, in_queue=832, util=89.51% 00:09:23.137 nvme0n3: ios=1929/2048, merge=0/0, ticks=517/330, in_queue=847, util=89.97% 00:09:23.137 nvme0n4: ios=1775/2048, merge=0/0, ticks=477/381, in_queue=858, util=90.23% 00:09:23.137 20:45:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:23.137 [global] 00:09:23.137 thread=1 00:09:23.137 invalidate=1 00:09:23.137 rw=write 00:09:23.137 time_based=1 00:09:23.137 runtime=1 00:09:23.137 ioengine=libaio 00:09:23.137 direct=1 00:09:23.137 bs=4096 00:09:23.137 iodepth=128 00:09:23.137 norandommap=0 00:09:23.137 numjobs=1 00:09:23.137 00:09:23.137 verify_dump=1 00:09:23.137 verify_backlog=512 00:09:23.137 verify_state_save=0 00:09:23.137 do_verify=1 00:09:23.137 verify=crc32c-intel 00:09:23.137 [job0] 00:09:23.137 filename=/dev/nvme0n1 00:09:23.137 [job1] 00:09:23.137 filename=/dev/nvme0n2 00:09:23.137 [job2] 00:09:23.137 filename=/dev/nvme0n3 00:09:23.137 [job3] 00:09:23.137 filename=/dev/nvme0n4 00:09:23.137 Could not set queue depth (nvme0n1) 00:09:23.137 Could not set queue depth (nvme0n2) 00:09:23.137 Could not set queue depth (nvme0n3) 00:09:23.137 Could not set queue depth (nvme0n4) 00:09:23.396 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.396 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.396 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.396 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.396 fio-3.35 00:09:23.396 Starting 4 threads 00:09:24.773 00:09:24.773 job0: (groupid=0, jobs=1): err= 0: pid=68508: Mon Jul 15 20:45:46 2024 00:09:24.773 read: IOPS=6190, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1003msec) 00:09:24.773 slat (usec): min=18, max=2013, avg=72.10, stdev=286.53 00:09:24.773 clat (usec): min=323, max=11265, avg=9980.58, stdev=761.39 00:09:24.773 lat (usec): min=2203, max=11289, avg=10052.68, stdev=708.44 00:09:24.773 clat percentiles (usec): 00:09:24.773 | 1.00th=[ 5473], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:09:24.773 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 
00:09:24.773 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10421], 95.00th=[10683], 00:09:24.773 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11076], 99.95th=[11076], 00:09:24.773 | 99.99th=[11207] 00:09:24.773 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:09:24.773 slat (usec): min=19, max=4681, avg=72.37, stdev=226.39 00:09:24.773 clat (usec): min=6919, max=18338, avg=9745.03, stdev=1123.68 00:09:24.773 lat (usec): min=6952, max=18384, avg=9817.40, stdev=1109.49 00:09:24.773 clat percentiles (usec): 00:09:24.773 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9372], 00:09:24.773 | 30.00th=[ 9503], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9634], 00:09:24.773 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10290], 00:09:24.773 | 99.00th=[16581], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:09:24.773 | 99.99th=[18220] 00:09:24.773 bw ( KiB/s): min=25908, max=25908, per=34.51%, avg=25908.00, stdev= 0.00, samples=1 00:09:24.773 iops : min= 6477, max= 6477, avg=6477.00, stdev= 0.00, samples=1 00:09:24.773 lat (usec) : 500=0.01% 00:09:24.773 lat (msec) : 4=0.25%, 10=64.87%, 20=34.87% 00:09:24.773 cpu : usr=8.98%, sys=23.75%, ctx=414, majf=0, minf=11 00:09:24.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:24.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.773 issued rwts: total=6209,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.773 job1: (groupid=0, jobs=1): err= 0: pid=68509: Mon Jul 15 20:45:46 2024 00:09:24.773 read: IOPS=2332, BW=9329KiB/s (9553kB/s)(9348KiB/1002msec) 00:09:24.773 slat (usec): min=13, max=12606, avg=234.52, stdev=1289.10 00:09:24.773 clat (usec): min=375, max=51824, avg=29658.42, stdev=8583.73 00:09:24.773 lat (usec): min=7657, max=51852, avg=29892.94, stdev=8550.97 00:09:24.773 clat percentiles (usec): 00:09:24.773 | 1.00th=[ 8455], 5.00th=[20317], 10.00th=[21365], 20.00th=[22414], 00:09:24.773 | 30.00th=[24773], 40.00th=[26084], 50.00th=[27657], 60.00th=[29754], 00:09:24.773 | 70.00th=[32113], 80.00th=[35914], 90.00th=[45876], 95.00th=[48497], 00:09:24.773 | 99.00th=[51643], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:09:24.773 | 99.99th=[51643] 00:09:24.773 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:09:24.773 slat (usec): min=14, max=11059, avg=163.79, stdev=830.15 00:09:24.773 clat (usec): min=8141, max=47338, avg=21842.83, stdev=8621.05 00:09:24.773 lat (usec): min=10012, max=47379, avg=22006.62, stdev=8627.17 00:09:24.773 clat percentiles (usec): 00:09:24.773 | 1.00th=[10159], 5.00th=[11600], 10.00th=[13960], 20.00th=[15139], 00:09:24.773 | 30.00th=[16450], 40.00th=[19268], 50.00th=[19530], 60.00th=[20055], 00:09:24.773 | 70.00th=[21103], 80.00th=[30016], 90.00th=[34866], 95.00th=[40109], 00:09:24.774 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:09:24.774 | 99.99th=[47449] 00:09:24.774 bw ( KiB/s): min=10248, max=10248, per=13.65%, avg=10248.00, stdev= 0.00, samples=1 00:09:24.774 iops : min= 2562, max= 2562, avg=2562.00, stdev= 0.00, samples=1 00:09:24.774 lat (usec) : 500=0.02% 00:09:24.774 lat (msec) : 10=0.96%, 20=32.35%, 50=65.41%, 100=1.27% 00:09:24.774 cpu : usr=3.30%, sys=12.59%, ctx=157, majf=0, minf=19 00:09:24.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 
00:09:24.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.774 issued rwts: total=2337,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.774 job2: (groupid=0, jobs=1): err= 0: pid=68510: Mon Jul 15 20:45:46 2024 00:09:24.774 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:09:24.774 slat (usec): min=10, max=16637, avg=148.86, stdev=996.20 00:09:24.774 clat (usec): min=13548, max=36198, avg=20965.34, stdev=3357.79 00:09:24.774 lat (usec): min=13579, max=39579, avg=21114.20, stdev=3446.99 00:09:24.774 clat percentiles (usec): 00:09:24.774 | 1.00th=[15139], 5.00th=[16712], 10.00th=[17957], 20.00th=[18482], 00:09:24.774 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[20055], 00:09:24.774 | 70.00th=[23462], 80.00th=[25297], 90.00th=[25822], 95.00th=[26084], 00:09:24.774 | 99.00th=[26870], 99.50th=[29492], 99.90th=[35390], 99.95th=[35914], 00:09:24.774 | 99.99th=[36439] 00:09:24.774 write: IOPS=3831, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec); 0 zone resets 00:09:24.774 slat (usec): min=23, max=7854, avg=110.43, stdev=572.72 00:09:24.774 clat (usec): min=740, max=25961, avg=13470.76, stdev=1707.41 00:09:24.774 lat (usec): min=5816, max=26017, avg=13581.19, stdev=1635.84 00:09:24.774 clat percentiles (usec): 00:09:24.774 | 1.00th=[ 7111], 5.00th=[11207], 10.00th=[12125], 20.00th=[12518], 00:09:24.774 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13566], 00:09:24.774 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15401], 00:09:24.774 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19792], 99.95th=[25822], 00:09:24.774 | 99.99th=[26084] 00:09:24.774 bw ( KiB/s): min=16384, max=16384, per=21.83%, avg=16384.00, stdev= 0.00, samples=1 00:09:24.774 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:24.774 lat (usec) : 750=0.01% 00:09:24.774 lat (msec) : 10=1.32%, 20=79.19%, 50=19.48% 00:09:24.774 cpu : usr=4.40%, sys=15.48%, ctx=162, majf=0, minf=9 00:09:24.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:24.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.774 issued rwts: total=3584,3839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.774 job3: (groupid=0, jobs=1): err= 0: pid=68511: Mon Jul 15 20:45:46 2024 00:09:24.774 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:24.774 slat (usec): min=5, max=7725, avg=85.26, stdev=403.47 00:09:24.774 clat (usec): min=7759, max=19966, avg=11495.52, stdev=1085.38 00:09:24.774 lat (usec): min=7767, max=20409, avg=11580.78, stdev=1121.72 00:09:24.774 clat percentiles (usec): 00:09:24.774 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10552], 20.00th=[10814], 00:09:24.774 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:24.774 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12911], 95.00th=[13829], 00:09:24.774 | 99.00th=[15270], 99.50th=[15664], 99.90th=[18482], 99.95th=[19268], 00:09:24.774 | 99.99th=[20055] 00:09:24.774 write: IOPS=5756, BW=22.5MiB/s (23.6MB/s)(22.5MiB/1002msec); 0 zone resets 00:09:24.774 slat (usec): min=7, max=6984, avg=79.60, stdev=357.52 00:09:24.774 clat (usec): min=223, max=19591, avg=10737.55, stdev=1273.07 00:09:24.774 
lat (usec): min=3142, max=19621, avg=10817.16, stdev=1312.10 00:09:24.774 clat percentiles (usec): 00:09:24.774 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10159], 00:09:24.774 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:09:24.774 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11863], 95.00th=[13304], 00:09:24.774 | 99.00th=[14484], 99.50th=[14877], 99.90th=[16057], 99.95th=[16909], 00:09:24.774 | 99.99th=[19530] 00:09:24.774 bw ( KiB/s): min=23736, max=23736, per=31.62%, avg=23736.00, stdev= 0.00, samples=1 00:09:24.774 iops : min= 5934, max= 5934, avg=5934.00, stdev= 0.00, samples=1 00:09:24.774 lat (usec) : 250=0.01% 00:09:24.774 lat (msec) : 4=0.19%, 10=7.89%, 20=91.90% 00:09:24.774 cpu : usr=6.89%, sys=20.88%, ctx=366, majf=0, minf=11 00:09:24.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:24.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.774 issued rwts: total=5632,5768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.774 00:09:24.774 Run status group 0 (all jobs): 00:09:24.774 READ: bw=69.2MiB/s (72.5MB/s), 9329KiB/s-24.2MiB/s (9553kB/s-25.4MB/s), io=69.4MiB (72.8MB), run=1002-1003msec 00:09:24.774 WRITE: bw=73.3MiB/s (76.9MB/s), 9.98MiB/s-25.9MiB/s (10.5MB/s-27.2MB/s), io=73.5MiB (77.1MB), run=1002-1003msec 00:09:24.774 00:09:24.774 Disk stats (read/write): 00:09:24.774 nvme0n1: ios=5586/5632, merge=0/0, ticks=11468/10321, in_queue=21789, util=89.18% 00:09:24.774 nvme0n2: ios=2097/2144, merge=0/0, ticks=15093/9707, in_queue=24800, util=89.00% 00:09:24.774 nvme0n3: ios=3099/3392, merge=0/0, ticks=60638/41078, in_queue=101716, util=89.87% 00:09:24.774 nvme0n4: ios=4753/5120, merge=0/0, ticks=24475/21320, in_queue=45795, util=89.20% 00:09:24.774 20:45:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:24.774 [global] 00:09:24.774 thread=1 00:09:24.774 invalidate=1 00:09:24.774 rw=randwrite 00:09:24.774 time_based=1 00:09:24.774 runtime=1 00:09:24.774 ioengine=libaio 00:09:24.774 direct=1 00:09:24.774 bs=4096 00:09:24.774 iodepth=128 00:09:24.774 norandommap=0 00:09:24.774 numjobs=1 00:09:24.774 00:09:24.774 verify_dump=1 00:09:24.774 verify_backlog=512 00:09:24.774 verify_state_save=0 00:09:24.774 do_verify=1 00:09:24.774 verify=crc32c-intel 00:09:24.774 [job0] 00:09:24.774 filename=/dev/nvme0n1 00:09:24.774 [job1] 00:09:24.774 filename=/dev/nvme0n2 00:09:24.774 [job2] 00:09:24.774 filename=/dev/nvme0n3 00:09:24.774 [job3] 00:09:24.774 filename=/dev/nvme0n4 00:09:24.774 Could not set queue depth (nvme0n1) 00:09:24.774 Could not set queue depth (nvme0n2) 00:09:24.774 Could not set queue depth (nvme0n3) 00:09:24.774 Could not set queue depth (nvme0n4) 00:09:24.774 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.774 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.774 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.774 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.774 fio-3.35 00:09:24.774 Starting 4 threads 00:09:26.151 00:09:26.151 job0: 
(groupid=0, jobs=1): err= 0: pid=68576: Mon Jul 15 20:45:47 2024 00:09:26.151 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1006msec) 00:09:26.151 slat (usec): min=3, max=11343, avg=170.70, stdev=724.23 00:09:26.151 clat (usec): min=5059, max=31057, avg=21154.21, stdev=3526.04 00:09:26.151 lat (usec): min=6818, max=31078, avg=21324.92, stdev=3528.41 00:09:26.151 clat percentiles (usec): 00:09:26.151 | 1.00th=[10945], 5.00th=[14484], 10.00th=[16909], 20.00th=[19268], 00:09:26.151 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:09:26.151 | 70.00th=[22152], 80.00th=[23200], 90.00th=[25035], 95.00th=[26870], 00:09:26.151 | 99.00th=[29754], 99.50th=[30278], 99.90th=[31065], 99.95th=[31065], 00:09:26.151 | 99.99th=[31065] 00:09:26.151 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:09:26.151 slat (usec): min=4, max=6758, avg=149.35, stdev=557.99 00:09:26.151 clat (usec): min=7178, max=30548, avg=20437.63, stdev=3224.21 00:09:26.151 lat (usec): min=7210, max=30852, avg=20586.98, stdev=3232.86 00:09:26.151 clat percentiles (usec): 00:09:26.151 | 1.00th=[ 9503], 5.00th=[15664], 10.00th=[16581], 20.00th=[18220], 00:09:26.151 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20579], 60.00th=[21627], 00:09:26.151 | 70.00th=[21890], 80.00th=[22152], 90.00th=[23462], 95.00th=[25560], 00:09:26.151 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:09:26.151 | 99.99th=[30540] 00:09:26.151 bw ( KiB/s): min=12288, max=12288, per=16.43%, avg=12288.00, stdev= 0.00, samples=2 00:09:26.151 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:26.151 lat (msec) : 10=1.17%, 20=33.87%, 50=64.95% 00:09:26.151 cpu : usr=2.89%, sys=10.45%, ctx=931, majf=0, minf=17 00:09:26.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:26.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.151 issued rwts: total=2980,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.151 job1: (groupid=0, jobs=1): err= 0: pid=68577: Mon Jul 15 20:45:47 2024 00:09:26.151 read: IOPS=5631, BW=22.0MiB/s (23.1MB/s)(22.2MiB/1007msec) 00:09:26.151 slat (usec): min=6, max=5080, avg=81.13, stdev=323.11 00:09:26.151 clat (usec): min=4973, max=28925, avg=11025.94, stdev=3064.96 00:09:26.151 lat (usec): min=6660, max=28934, avg=11107.07, stdev=3091.30 00:09:26.151 clat percentiles (usec): 00:09:26.151 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:09:26.151 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:09:26.151 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11863], 95.00th=[19792], 00:09:26.151 | 99.00th=[22938], 99.50th=[27395], 99.90th=[28443], 99.95th=[28967], 00:09:26.151 | 99.99th=[28967] 00:09:26.151 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:09:26.151 slat (usec): min=3, max=6745, avg=77.85, stdev=320.46 00:09:26.151 clat (usec): min=6332, max=28898, avg=10563.23, stdev=3228.53 00:09:26.151 lat (usec): min=6377, max=28916, avg=10641.08, stdev=3258.58 00:09:26.151 clat percentiles (usec): 00:09:26.151 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9110], 00:09:26.151 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:09:26.151 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[15270], 95.00th=[18744], 00:09:26.151 | 
99.00th=[22676], 99.50th=[27395], 99.90th=[28443], 99.95th=[28443], 00:09:26.151 | 99.99th=[28967] 00:09:26.151 bw ( KiB/s): min=21720, max=26674, per=32.34%, avg=24197.00, stdev=3503.01, samples=2 00:09:26.151 iops : min= 5430, max= 6668, avg=6049.00, stdev=875.40, samples=2 00:09:26.151 lat (msec) : 10=57.33%, 20=39.03%, 50=3.64% 00:09:26.151 cpu : usr=6.56%, sys=21.57%, ctx=530, majf=0, minf=7 00:09:26.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:26.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.151 issued rwts: total=5671,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.151 job2: (groupid=0, jobs=1): err= 0: pid=68578: Mon Jul 15 20:45:47 2024 00:09:26.151 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:26.151 slat (usec): min=12, max=7376, avg=79.30, stdev=434.50 00:09:26.151 clat (usec): min=7524, max=18854, avg=11458.78, stdev=1208.57 00:09:26.151 lat (usec): min=7545, max=21243, avg=11538.08, stdev=1231.67 00:09:26.151 clat percentiles (usec): 00:09:26.151 | 1.00th=[ 7898], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:09:26.151 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:09:26.151 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:09:26.151 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:09:26.151 | 99.99th=[18744] 00:09:26.151 write: IOPS=6009, BW=23.5MiB/s (24.6MB/s)(23.5MiB/1002msec); 0 zone resets 00:09:26.151 slat (usec): min=23, max=6056, avg=80.69, stdev=360.25 00:09:26.151 clat (usec): min=1137, max=14319, avg=10352.86, stdev=1060.15 00:09:26.151 lat (usec): min=1170, max=16291, avg=10433.55, stdev=1019.64 00:09:26.151 clat percentiles (usec): 00:09:26.151 | 1.00th=[ 6456], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:09:26.151 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:09:26.151 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11338], 00:09:26.151 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14222], 99.95th=[14353], 00:09:26.151 | 99.99th=[14353] 00:09:26.151 bw ( KiB/s): min=22584, max=24625, per=31.55%, avg=23604.50, stdev=1443.20, samples=2 00:09:26.151 iops : min= 5646, max= 6156, avg=5901.00, stdev=360.62, samples=2 00:09:26.151 lat (msec) : 2=0.09%, 10=16.84%, 20=83.07% 00:09:26.151 cpu : usr=6.29%, sys=24.18%, ctx=252, majf=0, minf=13 00:09:26.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:26.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.151 issued rwts: total=5632,6022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.151 job3: (groupid=0, jobs=1): err= 0: pid=68579: Mon Jul 15 20:45:47 2024 00:09:26.151 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:09:26.151 slat (usec): min=7, max=8445, avg=141.99, stdev=579.21 00:09:26.151 clat (usec): min=1873, max=33435, avg=18174.22, stdev=6084.75 00:09:26.151 lat (usec): min=1893, max=33462, avg=18316.21, stdev=6129.60 00:09:26.152 clat percentiles (usec): 00:09:26.152 | 1.00th=[ 5604], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:09:26.152 | 30.00th=[11600], 40.00th=[17957], 50.00th=[20579], 
60.00th=[21103], 00:09:26.152 | 70.00th=[21627], 80.00th=[22152], 90.00th=[26084], 95.00th=[28443], 00:09:26.152 | 99.00th=[30540], 99.50th=[30540], 99.90th=[31065], 99.95th=[31327], 00:09:26.152 | 99.99th=[33424] 00:09:26.152 write: IOPS=3588, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:09:26.152 slat (usec): min=9, max=6934, avg=124.51, stdev=498.75 00:09:26.152 clat (usec): min=1382, max=31588, avg=17098.81, stdev=5212.61 00:09:26.152 lat (usec): min=1414, max=31607, avg=17223.32, stdev=5253.08 00:09:26.152 clat percentiles (usec): 00:09:26.152 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:09:26.152 | 30.00th=[11863], 40.00th=[16712], 50.00th=[19006], 60.00th=[19792], 00:09:26.152 | 70.00th=[21365], 80.00th=[21890], 90.00th=[22414], 95.00th=[23987], 00:09:26.152 | 99.00th=[26084], 99.50th=[26608], 99.90th=[31065], 99.95th=[31589], 00:09:26.152 | 99.99th=[31589] 00:09:26.152 bw ( KiB/s): min=12288, max=12288, per=16.43%, avg=12288.00, stdev= 0.00, samples=1 00:09:26.152 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:26.152 lat (msec) : 2=0.24%, 4=0.29%, 10=4.54%, 20=48.83%, 50=46.10% 00:09:26.152 cpu : usr=4.00%, sys=14.39%, ctx=763, majf=0, minf=13 00:09:26.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:26.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.152 issued rwts: total=3584,3596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.152 00:09:26.152 Run status group 0 (all jobs): 00:09:26.152 READ: bw=69.3MiB/s (72.7MB/s), 11.6MiB/s-22.0MiB/s (12.1MB/s-23.1MB/s), io=69.8MiB (73.2MB), run=1002-1007msec 00:09:26.152 WRITE: bw=73.1MiB/s (76.6MB/s), 11.9MiB/s-23.8MiB/s (12.5MB/s-25.0MB/s), io=73.6MiB (77.1MB), run=1002-1007msec 00:09:26.152 00:09:26.152 Disk stats (read/write): 00:09:26.152 nvme0n1: ios=2601/2575, merge=0/0, ticks=26332/24107, in_queue=50439, util=86.56% 00:09:26.152 nvme0n2: ios=5293/5632, merge=0/0, ticks=25338/19652, in_queue=44990, util=89.29% 00:09:26.152 nvme0n3: ios=4975/5120, merge=0/0, ticks=51668/45776, in_queue=97444, util=90.12% 00:09:26.152 nvme0n4: ios=2577/3011, merge=0/0, ticks=26143/23769, in_queue=49912, util=88.95% 00:09:26.152 20:45:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:26.152 20:45:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:26.152 20:45:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68592 00:09:26.152 20:45:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:26.152 [global] 00:09:26.152 thread=1 00:09:26.152 invalidate=1 00:09:26.152 rw=read 00:09:26.152 time_based=1 00:09:26.152 runtime=10 00:09:26.152 ioengine=libaio 00:09:26.152 direct=1 00:09:26.152 bs=4096 00:09:26.152 iodepth=1 00:09:26.152 norandommap=1 00:09:26.152 numjobs=1 00:09:26.152 00:09:26.152 [job0] 00:09:26.152 filename=/dev/nvme0n1 00:09:26.152 [job1] 00:09:26.152 filename=/dev/nvme0n2 00:09:26.152 [job2] 00:09:26.152 filename=/dev/nvme0n3 00:09:26.152 [job3] 00:09:26.152 filename=/dev/nvme0n4 00:09:26.152 Could not set queue depth (nvme0n1) 00:09:26.152 Could not set queue depth (nvme0n2) 00:09:26.152 Could not set queue depth (nvme0n3) 00:09:26.152 Could not set queue depth (nvme0n4) 00:09:26.152 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.152 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.152 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.152 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.152 fio-3.35 00:09:26.152 Starting 4 threads 00:09:29.434 20:45:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:29.434 fio: pid=68635, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:29.434 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=76914688, buflen=4096 00:09:29.434 20:45:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:29.434 fio: pid=68634, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:29.434 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=80338944, buflen=4096 00:09:29.434 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.434 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:29.693 fio: pid=68632, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:29.693 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=27107328, buflen=4096 00:09:29.693 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.693 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:29.693 fio: pid=68633, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:29.693 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=31309824, buflen=4096 00:09:29.693 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.693 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:29.693 00:09:29.693 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68632: Mon Jul 15 20:45:51 2024 00:09:29.693 read: IOPS=7000, BW=27.3MiB/s (28.7MB/s)(89.9MiB/3286msec) 00:09:29.693 slat (usec): min=6, max=11795, avg= 9.30, stdev=135.91 00:09:29.693 clat (usec): min=59, max=1747, avg=132.84, stdev=24.13 00:09:29.693 lat (usec): min=101, max=11993, avg=142.14, stdev=138.47 00:09:29.693 clat percentiles (usec): 00:09:29.693 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 125], 00:09:29.693 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:09:29.693 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 151], 00:09:29.693 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 206], 99.95th=[ 449], 00:09:29.693 | 99.99th=[ 1418] 00:09:29.693 bw ( KiB/s): min=26880, max=28600, per=28.76%, avg=28119.83, stdev=628.78, samples=6 00:09:29.693 iops : min= 6720, max= 7150, avg=7029.83, stdev=157.11, samples=6 00:09:29.693 lat (usec) : 100=0.06%, 250=99.85%, 500=0.05%, 750=0.01%, 1000=0.01% 00:09:29.693 lat (msec) : 2=0.02% 00:09:29.693 cpu : usr=1.52%, sys=4.66%, 
ctx=23012, majf=0, minf=1 00:09:29.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.693 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.693 issued rwts: total=23003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.693 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68633: Mon Jul 15 20:45:51 2024 00:09:29.693 read: IOPS=6875, BW=26.9MiB/s (28.2MB/s)(93.9MiB/3495msec) 00:09:29.693 slat (usec): min=6, max=11863, avg=11.50, stdev=141.79 00:09:29.693 clat (usec): min=89, max=2194, avg=133.16, stdev=24.65 00:09:29.693 lat (usec): min=99, max=12026, avg=144.67, stdev=144.27 00:09:29.693 clat percentiles (usec): 00:09:29.693 | 1.00th=[ 99], 5.00th=[ 113], 10.00th=[ 120], 20.00th=[ 125], 00:09:29.693 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:29.693 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:09:29.693 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 265], 99.95th=[ 347], 00:09:29.693 | 99.99th=[ 1516] 00:09:29.693 bw ( KiB/s): min=25600, max=28296, per=28.05%, avg=27426.50, stdev=1007.24, samples=6 00:09:29.693 iops : min= 6400, max= 7074, avg=6856.50, stdev=251.69, samples=6 00:09:29.693 lat (usec) : 100=1.32%, 250=98.56%, 500=0.09%, 1000=0.01% 00:09:29.693 lat (msec) : 2=0.01%, 4=0.01% 00:09:29.693 cpu : usr=0.94%, sys=6.33%, ctx=24038, majf=0, minf=1 00:09:29.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.693 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.693 issued rwts: total=24029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.693 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68634: Mon Jul 15 20:45:51 2024 00:09:29.693 read: IOPS=6339, BW=24.8MiB/s (26.0MB/s)(76.6MiB/3094msec) 00:09:29.693 slat (usec): min=6, max=11428, avg= 9.39, stdev=107.78 00:09:29.693 clat (usec): min=115, max=2923, avg=147.58, stdev=34.47 00:09:29.693 lat (usec): min=122, max=11587, avg=156.98, stdev=113.28 00:09:29.693 clat percentiles (usec): 00:09:29.693 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 137], 00:09:29.693 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:09:29.693 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:09:29.693 | 99.00th=[ 188], 99.50th=[ 237], 99.90th=[ 529], 99.95th=[ 709], 00:09:29.693 | 99.99th=[ 1876] 00:09:29.693 bw ( KiB/s): min=24352, max=26056, per=25.95%, avg=25368.17, stdev=690.42, samples=6 00:09:29.693 iops : min= 6088, max= 6514, avg=6342.00, stdev=172.62, samples=6 00:09:29.693 lat (usec) : 250=99.59%, 500=0.30%, 750=0.06%, 1000=0.03% 00:09:29.693 lat (msec) : 2=0.02%, 4=0.01% 00:09:29.693 cpu : usr=1.10%, sys=5.04%, ctx=19619, majf=0, minf=1 00:09:29.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.693 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.693 issued rwts: total=19615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.693 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:29.693 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68635: Mon Jul 15 20:45:51 2024 00:09:29.693 read: IOPS=6497, BW=25.4MiB/s (26.6MB/s)(73.4MiB/2890msec) 00:09:29.693 slat (nsec): min=6881, max=74570, avg=7878.08, stdev=1901.22 00:09:29.693 clat (usec): min=115, max=1916, avg=145.33, stdev=30.12 00:09:29.693 lat (usec): min=122, max=1933, avg=153.21, stdev=30.32 00:09:29.693 clat percentiles (usec): 00:09:29.694 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:09:29.694 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:09:29.694 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:09:29.694 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 235], 99.95th=[ 461], 00:09:29.694 | 99.99th=[ 1893] 00:09:29.694 bw ( KiB/s): min=25816, max=26240, per=26.66%, avg=26061.40, stdev=165.40, samples=5 00:09:29.694 iops : min= 6454, max= 6560, avg=6515.20, stdev=41.25, samples=5 00:09:29.694 lat (usec) : 250=99.91%, 500=0.04%, 750=0.01%, 1000=0.01% 00:09:29.694 lat (msec) : 2=0.03% 00:09:29.694 cpu : usr=0.93%, sys=4.74%, ctx=18782, majf=0, minf=2 00:09:29.694 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.694 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.694 issued rwts: total=18779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.694 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.694 00:09:29.694 Run status group 0 (all jobs): 00:09:29.694 READ: bw=95.5MiB/s (100MB/s), 24.8MiB/s-27.3MiB/s (26.0MB/s-28.7MB/s), io=334MiB (350MB), run=2890-3495msec 00:09:29.694 00:09:29.694 Disk stats (read/write): 00:09:29.694 nvme0n1: ios=21825/0, merge=0/0, ticks=2907/0, in_queue=2907, util=95.10% 00:09:29.694 nvme0n2: ios=23027/0, merge=0/0, ticks=3128/0, in_queue=3128, util=95.33% 00:09:29.694 nvme0n3: ios=18375/0, merge=0/0, ticks=2738/0, in_queue=2738, util=96.67% 00:09:29.694 nvme0n4: ios=18693/0, merge=0/0, ticks=2741/0, in_queue=2741, util=96.74% 00:09:29.951 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.951 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:30.209 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.209 20:45:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:30.467 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.467 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68592 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 
-- # fio_status=4 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.725 nvmf hotplug test: fio failed as expected 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:30.725 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.984 rmmod nvme_tcp 00:09:30.984 rmmod nvme_fabrics 00:09:30.984 rmmod nvme_keyring 00:09:30.984 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68217 ']' 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68217 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68217 ']' 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68217 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68217 00:09:31.243 killing process with pid 68217 00:09:31.243 20:45:52 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68217' 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68217 00:09:31.243 20:45:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68217 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.243 20:45:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.500 20:45:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:31.500 00:09:31.500 real 0m18.018s 00:09:31.500 user 1m5.864s 00:09:31.500 sys 0m10.934s 00:09:31.500 20:45:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.500 ************************************ 00:09:31.500 END TEST nvmf_fio_target 00:09:31.500 ************************************ 00:09:31.500 20:45:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.500 20:45:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:31.500 20:45:53 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:31.500 20:45:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.500 20:45:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.500 20:45:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.500 ************************************ 00:09:31.500 START TEST nvmf_bdevio 00:09:31.500 ************************************ 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:31.500 * Looking for test storage... 
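The err=121 failures that fio reports above are the intended outcome of this test: while the four read jobs are still in flight, the raid and malloc bdevs backing the exported namespaces are deleted over RPC, every job aborts with a Remote I/O error, and the script records "nvmf hotplug test: fio failed as expected". A minimal bash sketch of that hotplug pattern follows; the rpc.py path, device names and job parameters come from the trace above, while the runtime, the sleep and the wait/echo wrapper are assumptions added for illustration, not the script's actual logic.

    # Sketch only -- not the actual target/fio.sh; --runtime and the sleep are assumed.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Four sequential-read jobs, one per namespace of the connected controller,
    # matching the job parameters shown in the log (libaio, 4k blocks, iodepth=1).
    fio --ioengine=libaio --rw=read --bs=4k --iodepth=1 --time_based --runtime=30 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4 &
    fio_pid=$!

    sleep 3
    # Pull the backing bdevs out from under the running jobs, as the trace above does.
    "$RPC" bdev_raid_delete concat0
    "$RPC" bdev_raid_delete raid0
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$RPC" bdev_malloc_delete "$bdev"
    done

    # Each job is expected to fail with err=121 (Remote I/O error).
    if wait "$fio_pid"; then
        echo "unexpected: fio completed without errors"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi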
00:09:31.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.500 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.501 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.790 20:45:53 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.790 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:31.791 Cannot find device "nvmf_tgt_br" 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.791 Cannot find device "nvmf_tgt_br2" 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:31.791 Cannot find device "nvmf_tgt_br" 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:31.791 Cannot find device "nvmf_tgt_br2" 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.791 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:32.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:32.050 00:09:32.050 --- 10.0.0.2 ping statistics --- 00:09:32.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.050 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:32.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:32.050 00:09:32.050 --- 10.0.0.3 ping statistics --- 00:09:32.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.050 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:32.050 00:09:32.050 --- 10.0.0.1 ping statistics --- 00:09:32.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.050 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68894 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68894 00:09:32.050 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 68894 ']' 00:09:32.051 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.051 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.051 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.051 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.051 20:45:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.051 [2024-07-15 20:45:53.902994] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:09:32.051 [2024-07-15 20:45:53.903056] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.309 [2024-07-15 20:45:54.044440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.309 [2024-07-15 20:45:54.133701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.309 [2024-07-15 20:45:54.133748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:32.309 [2024-07-15 20:45:54.133758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.309 [2024-07-15 20:45:54.133766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.309 [2024-07-15 20:45:54.133773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.309 [2024-07-15 20:45:54.133975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:32.309 [2024-07-15 20:45:54.134149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:32.309 [2024-07-15 20:45:54.134446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:32.309 [2024-07-15 20:45:54.134563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.309 [2024-07-15 20:45:54.175795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.875 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.133 [2024-07-15 20:45:54.786852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.133 Malloc0 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.133 [2024-07-15 20:45:54.851432] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:33.133 { 00:09:33.133 "params": { 00:09:33.133 "name": "Nvme$subsystem", 00:09:33.133 "trtype": "$TEST_TRANSPORT", 00:09:33.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.133 "adrfam": "ipv4", 00:09:33.133 "trsvcid": "$NVMF_PORT", 00:09:33.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.133 "hdgst": ${hdgst:-false}, 00:09:33.133 "ddgst": ${ddgst:-false} 00:09:33.133 }, 00:09:33.133 "method": "bdev_nvme_attach_controller" 00:09:33.133 } 00:09:33.133 EOF 00:09:33.133 )") 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:33.133 20:45:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:33.133 "params": { 00:09:33.133 "name": "Nvme1", 00:09:33.133 "trtype": "tcp", 00:09:33.133 "traddr": "10.0.0.2", 00:09:33.133 "adrfam": "ipv4", 00:09:33.133 "trsvcid": "4420", 00:09:33.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.133 "hdgst": false, 00:09:33.133 "ddgst": false 00:09:33.133 }, 00:09:33.133 "method": "bdev_nvme_attach_controller" 00:09:33.133 }' 00:09:33.133 [2024-07-15 20:45:54.903410] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
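The target side of this bdevio run is assembled over JSON-RPC, as the rpc_cmd trace above shows: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace and a listener on 10.0.0.2:4420. Condensed into plain rpc.py calls, the traced sequence looks roughly like the sketch below (it assumes the target is already up and reachable on its default RPC socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192       # flags exactly as traced above
    "$RPC" bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then pointed at that listener through the generated configuration shown above (the bdev_nvme_attach_controller JSON block) passed via --json /dev/fd/62.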
00:09:33.133 [2024-07-15 20:45:54.903760] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68930 ] 00:09:33.390 [2024-07-15 20:45:55.045687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.390 [2024-07-15 20:45:55.125300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.390 [2024-07-15 20:45:55.125496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.390 [2024-07-15 20:45:55.125558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.390 [2024-07-15 20:45:55.175428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.390 I/O targets: 00:09:33.390 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:33.390 00:09:33.390 00:09:33.390 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.390 http://cunit.sourceforge.net/ 00:09:33.390 00:09:33.390 00:09:33.390 Suite: bdevio tests on: Nvme1n1 00:09:33.390 Test: blockdev write read block ...passed 00:09:33.390 Test: blockdev write zeroes read block ...passed 00:09:33.390 Test: blockdev write zeroes read no split ...passed 00:09:33.647 Test: blockdev write zeroes read split ...passed 00:09:33.647 Test: blockdev write zeroes read split partial ...passed 00:09:33.647 Test: blockdev reset ...[2024-07-15 20:45:55.309494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:33.647 [2024-07-15 20:45:55.309565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e197c0 (9): Bad file descriptor 00:09:33.647 passed 00:09:33.647 Test: blockdev write read 8 blocks ...[2024-07-15 20:45:55.322994] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:33.647 passed 00:09:33.647 Test: blockdev write read size > 128k ...passed 00:09:33.648 Test: blockdev write read invalid size ...passed 00:09:33.648 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:33.648 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:33.648 Test: blockdev write read max offset ...passed 00:09:33.648 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:33.648 Test: blockdev writev readv 8 blocks ...passed 00:09:33.648 Test: blockdev writev readv 30 x 1block ...passed 00:09:33.648 Test: blockdev writev readv block ...passed 00:09:33.648 Test: blockdev writev readv size > 128k ...passed 00:09:33.648 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:33.648 Test: blockdev comparev and writev ...[2024-07-15 20:45:55.328566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.328599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.328615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.328625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:33.648 passed 00:09:33.648 Test: blockdev nvme passthru rw ...passed 00:09:33.648 Test: blockdev nvme passthru vendor specific ...passed 00:09:33.648 Test: blockdev nvme admin passthru ...[2024-07-15 20:45:55.329004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.329019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.329032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.329041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.329284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.329296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.329310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.329318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.329541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.648 [2024-07-15 20:45:55.329552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.329567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:09:33.648 [2024-07-15 20:45:55.329575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.330206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.648 [2024-07-15 20:45:55.330221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.330300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.648 [2024-07-15 20:45:55.330311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.330391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.648 [2024-07-15 20:45:55.330402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:33.648 [2024-07-15 20:45:55.330476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.648 [2024-07-15 20:45:55.330487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:33.648 passed 00:09:33.648 Test: blockdev copy ...passed 00:09:33.648 00:09:33.648 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.648 suites 1 1 n/a 0 0 00:09:33.648 tests 23 23 23 0 0 00:09:33.648 asserts 152 152 152 0 n/a 00:09:33.648 00:09:33.648 Elapsed time = 0.136 seconds 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.648 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.907 rmmod nvme_tcp 00:09:33.907 rmmod nvme_fabrics 00:09:33.907 rmmod nvme_keyring 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68894 ']' 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68894 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 68894 ']' 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@952 -- # kill -0 68894 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68894 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:33.907 killing process with pid 68894 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68894' 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 68894 00:09:33.907 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 68894 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:34.166 00:09:34.166 real 0m2.732s 00:09:34.166 user 0m8.146s 00:09:34.166 sys 0m0.826s 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.166 20:45:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 ************************************ 00:09:34.166 END TEST nvmf_bdevio 00:09:34.166 ************************************ 00:09:34.166 20:45:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:34.166 20:45:56 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:34.166 20:45:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:34.166 20:45:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.166 20:45:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.166 ************************************ 00:09:34.166 START TEST nvmf_auth_target 00:09:34.166 ************************************ 00:09:34.166 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:34.426 * Looking for test storage... 
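Before the auth target test proper starts, nvmftestinit runs the same nvmf_veth_init sequence that was traced for the bdevio test above and is traced again below: the target lives in the nvmf_tgt_ns_spdk network namespace and is reached from the host over veth pairs joined by the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. Stripped of the stale-interface cleanup and error handling, the topology those commands build is roughly the following sketch (all commands taken from the trace):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one for the initiator, two for the target namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together so the namespace is reachable.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # host -> target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host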
00:09:34.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.426 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:34.427 Cannot find device "nvmf_tgt_br" 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.427 Cannot find device "nvmf_tgt_br2" 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:34.427 Cannot find device "nvmf_tgt_br" 00:09:34.427 
20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:34.427 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:34.427 Cannot find device "nvmf_tgt_br2" 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.686 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.945 20:45:56 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:34.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:09:34.945 00:09:34.945 --- 10.0.0.2 ping statistics --- 00:09:34.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.945 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:34.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:09:34.945 00:09:34.945 --- 10.0.0.3 ping statistics --- 00:09:34.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.945 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:34.945 00:09:34.945 --- 10.0.0.1 ping statistics --- 00:09:34.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.945 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69109 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69109 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69109 ']' 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.945 20:45:56 
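Once the bridge is assembled, the init path opens the firewall for NVMe/TCP, verifies reachability in both directions, and loads the kernel initiator module before starting the target. A short sketch of that verification step (addresses and rules as seen in the log):

# Allow NVMe/TCP traffic to the initiator-side veth and let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Root namespace -> both target addresses, and target namespace -> initiator.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

modprobe nvme-tcp   # kernel initiator used later by 'nvme connect'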
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.945 20:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69136 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae6beea647e47cf372581cabb7dcd0109db53f9a913bc2df 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZEf 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae6beea647e47cf372581cabb7dcd0109db53f9a913bc2df 0 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae6beea647e47cf372581cabb7dcd0109db53f9a913bc2df 0 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae6beea647e47cf372581cabb7dcd0109db53f9a913bc2df 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZEf 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZEf 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ZEf 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4c919bc19264e867901129d000f7d02c19126200ed7abe9759b572b299a30ac9 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Wa0 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4c919bc19264e867901129d000f7d02c19126200ed7abe9759b572b299a30ac9 3 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4c919bc19264e867901129d000f7d02c19126200ed7abe9759b572b299a30ac9 3 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4c919bc19264e867901129d000f7d02c19126200ed7abe9759b572b299a30ac9 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:35.958 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Wa0 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Wa0 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Wa0 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6e4b26135d62f370cf6961e6f9e12c18 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.w4g 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6e4b26135d62f370cf6961e6f9e12c18 1 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6e4b26135d62f370cf6961e6f9e12c18 1 
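Each gen_dhchap_key invocation above follows the same pattern: draw len/2 random bytes, hex-encode them with xxd, and wrap the result into a DHHC-1 secret stored in a mode-0600 temp file. A rough sketch of that flow is below; gen_key_sketch is a hypothetical name, and the exact DHHC-1 encoding (done by the embedded Python helper in nvmf/common.sh) is deliberately not reproduced:

# Sketch only: mirrors the shape of gen_dhchap_key <digest> <len>.
gen_key_sketch() {
    local digest=$1 len=$2

    # 48 hex characters come from 24 random bytes, 64 from 32, and so on.
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    local file
    file=$(mktemp -t "spdk.key-${digest}.XXX")

    # The real helper then writes "DHHC-1:<digest id>:<encoded key>:" into
    # $file via its inline Python; that encoding step is omitted here.
    chmod 0600 "$file"
    echo "$file"
}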
00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6e4b26135d62f370cf6961e6f9e12c18 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.w4g 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.w4g 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.w4g 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7b20fd635b5c77b4f27ba60e0ddb0d902e2a965c81d597ac 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gGF 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7b20fd635b5c77b4f27ba60e0ddb0d902e2a965c81d597ac 2 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7b20fd635b5c77b4f27ba60e0ddb0d902e2a965c81d597ac 2 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7b20fd635b5c77b4f27ba60e0ddb0d902e2a965c81d597ac 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gGF 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gGF 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.gGF 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:36.217 
20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:36.217 20:45:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ea0c23527145faabbc12f0a86cabc8cc02c842e5e895a169 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cUr 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ea0c23527145faabbc12f0a86cabc8cc02c842e5e895a169 2 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ea0c23527145faabbc12f0a86cabc8cc02c842e5e895a169 2 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ea0c23527145faabbc12f0a86cabc8cc02c842e5e895a169 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cUr 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cUr 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.cUr 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d3210845110752edcc366ec062dcbad1 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:36.217 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9h7 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d3210845110752edcc366ec062dcbad1 1 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d3210845110752edcc366ec062dcbad1 1 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d3210845110752edcc366ec062dcbad1 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:36.218 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9h7 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9h7 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9h7 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3ea395be60e451bd46dd182f46ed050cf6d3a184ddc74fdee078ef6269fb2ddb 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5k6 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3ea395be60e451bd46dd182f46ed050cf6d3a184ddc74fdee078ef6269fb2ddb 3 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3ea395be60e451bd46dd182f46ed050cf6d3a184ddc74fdee078ef6269fb2ddb 3 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3ea395be60e451bd46dd182f46ed050cf6d3a184ddc74fdee078ef6269fb2ddb 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5k6 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5k6 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.5k6 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69109 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69109 ']' 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
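Two SPDK applications take part in this test: the NVMe-oF target running inside the namespace on the default /var/tmp/spdk.sock, and a second spdk_tgt acting as the host/initiator side on /var/tmp/host.sock. A sketch of how they are launched and waited on (paths and flags as seen in the log; waitforlisten is the helper from autotest_common.sh):

# Target side: runs inside the test namespace, RPC on /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

# Host side: a separate spdk_tgt used as the NVMe initiator, on its own socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!

waitforlisten "$nvmfpid"                     # default RPC socket /var/tmp/spdk.sock
waitforlisten "$hostpid" /var/tmp/host.sock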
00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.477 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69136 /var/tmp/host.sock 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69136 ']' 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZEf 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ZEf 00:09:36.736 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ZEf 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Wa0 ]] 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wa0 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wa0 00:09:36.995 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.Wa0 00:09:37.253 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:37.253 20:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.w4g 00:09:37.253 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.253 20:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.253 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.253 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.w4g 00:09:37.253 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.w4g 00:09:37.512 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.gGF ]] 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gGF 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gGF 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gGF 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.cUr 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.cUr 00:09:37.513 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.cUr 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9h7 ]] 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9h7 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9h7 00:09:37.772 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9h7 00:09:38.031 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:38.031 
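Every generated key file is registered twice: once with the target over the default RPC socket and once with the host-side spdk_tgt over /var/tmp/host.sock, with ckeyN carrying the optional controller (bidirectional) secret. A condensed sketch of that loop using the same keyring_file_add_key RPC as the log; the target-side rpc_cmd wrapper is shown here as a plain rpc.py call, and the key file paths are the ones generated above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

keys=(/tmp/spdk.key-null.ZEf /tmp/spdk.key-sha256.w4g /tmp/spdk.key-sha384.cUr /tmp/spdk.key-sha512.5k6)
ckeys=(/tmp/spdk.key-sha512.Wa0 /tmp/spdk.key-sha384.gGF /tmp/spdk.key-sha256.9h7 "")

for i in "${!keys[@]}"; do
    # Target side (default /var/tmp/spdk.sock).
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    # Host side.
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"

    if [[ -n ${ckeys[$i]} ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done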
20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5k6 00:09:38.031 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.031 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.031 20:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.031 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5k6 00:09:38.032 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5k6 00:09:38.291 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:09:38.291 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:09:38.291 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:38.291 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:38.291 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.291 20:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:38.291 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:38.550 00:09:38.550 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:38.550 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:38.550 20:46:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:38.809 { 00:09:38.809 "cntlid": 1, 00:09:38.809 "qid": 0, 00:09:38.809 "state": "enabled", 00:09:38.809 "thread": "nvmf_tgt_poll_group_000", 00:09:38.809 "listen_address": { 00:09:38.809 "trtype": "TCP", 00:09:38.809 "adrfam": "IPv4", 00:09:38.809 "traddr": "10.0.0.2", 00:09:38.809 "trsvcid": "4420" 00:09:38.809 }, 00:09:38.809 "peer_address": { 00:09:38.809 "trtype": "TCP", 00:09:38.809 "adrfam": "IPv4", 00:09:38.809 "traddr": "10.0.0.1", 00:09:38.809 "trsvcid": "46638" 00:09:38.809 }, 00:09:38.809 "auth": { 00:09:38.809 "state": "completed", 00:09:38.809 "digest": "sha256", 00:09:38.809 "dhgroup": "null" 00:09:38.809 } 00:09:38.809 } 00:09:38.809 ]' 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:38.809 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:39.067 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.067 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.067 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:39.067 20:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.333 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.334 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.334 00:09:43.334 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:43.334 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:43.334 20:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:43.334 { 00:09:43.334 "cntlid": 3, 00:09:43.334 "qid": 0, 00:09:43.334 "state": "enabled", 00:09:43.334 "thread": "nvmf_tgt_poll_group_000", 00:09:43.334 "listen_address": { 00:09:43.334 "trtype": "TCP", 00:09:43.334 "adrfam": "IPv4", 00:09:43.334 "traddr": "10.0.0.2", 00:09:43.334 "trsvcid": "4420" 00:09:43.334 }, 00:09:43.334 "peer_address": { 00:09:43.334 "trtype": "TCP", 00:09:43.334 
"adrfam": "IPv4", 00:09:43.334 "traddr": "10.0.0.1", 00:09:43.334 "trsvcid": "46660" 00:09:43.334 }, 00:09:43.334 "auth": { 00:09:43.334 "state": "completed", 00:09:43.334 "digest": "sha256", 00:09:43.334 "dhgroup": "null" 00:09:43.334 } 00:09:43.334 } 00:09:43.334 ]' 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.334 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.593 20:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:44.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:44.160 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.419 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:44.679 00:09:44.679 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:44.679 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.679 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:44.938 { 00:09:44.938 "cntlid": 5, 00:09:44.938 "qid": 0, 00:09:44.938 "state": "enabled", 00:09:44.938 "thread": "nvmf_tgt_poll_group_000", 00:09:44.938 "listen_address": { 00:09:44.938 "trtype": "TCP", 00:09:44.938 "adrfam": "IPv4", 00:09:44.938 "traddr": "10.0.0.2", 00:09:44.938 "trsvcid": "4420" 00:09:44.938 }, 00:09:44.938 "peer_address": { 00:09:44.938 "trtype": "TCP", 00:09:44.938 "adrfam": "IPv4", 00:09:44.938 "traddr": "10.0.0.1", 00:09:44.938 "trsvcid": "46678" 00:09:44.938 }, 00:09:44.938 "auth": { 00:09:44.938 "state": "completed", 00:09:44.938 "digest": "sha256", 00:09:44.938 "dhgroup": "null" 00:09:44.938 } 00:09:44.938 } 00:09:44.938 ]' 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.938 20:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.197 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:45.766 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:46.025 20:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:46.284 00:09:46.284 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:46.284 20:46:08 
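After each host-side bdev_nvme_attach_controller round-trip, the same credentials are exercised through the kernel initiator: nvme-cli is handed the literal DHHC-1 secrets instead of keyring names. A sketch of that step (NQN, host UUID, and flags follow the log; the placeholder secrets stand in for the full base64 blobs shown above):

hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e
subnqn=nqn.2024-03.io.spdk:cnode0

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
    --dhchap-secret      "DHHC-1:00:<host secret>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"

# ... the connection is only checked for successful authentication ...

nvme disconnect -n "$subnqn"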
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:46.284 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:46.543 { 00:09:46.543 "cntlid": 7, 00:09:46.543 "qid": 0, 00:09:46.543 "state": "enabled", 00:09:46.543 "thread": "nvmf_tgt_poll_group_000", 00:09:46.543 "listen_address": { 00:09:46.543 "trtype": "TCP", 00:09:46.543 "adrfam": "IPv4", 00:09:46.543 "traddr": "10.0.0.2", 00:09:46.543 "trsvcid": "4420" 00:09:46.543 }, 00:09:46.543 "peer_address": { 00:09:46.543 "trtype": "TCP", 00:09:46.543 "adrfam": "IPv4", 00:09:46.543 "traddr": "10.0.0.1", 00:09:46.543 "trsvcid": "37062" 00:09:46.543 }, 00:09:46.543 "auth": { 00:09:46.543 "state": "completed", 00:09:46.543 "digest": "sha256", 00:09:46.543 "dhgroup": "null" 00:09:46.543 } 00:09:46.543 } 00:09:46.543 ]' 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.543 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.802 20:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:47.371 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:47.630 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:47.890 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.890 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:48.149 { 00:09:48.149 "cntlid": 9, 00:09:48.149 "qid": 0, 00:09:48.149 "state": "enabled", 00:09:48.149 "thread": "nvmf_tgt_poll_group_000", 00:09:48.149 "listen_address": { 00:09:48.149 "trtype": "TCP", 00:09:48.149 "adrfam": "IPv4", 00:09:48.149 
"traddr": "10.0.0.2", 00:09:48.149 "trsvcid": "4420" 00:09:48.149 }, 00:09:48.149 "peer_address": { 00:09:48.149 "trtype": "TCP", 00:09:48.149 "adrfam": "IPv4", 00:09:48.149 "traddr": "10.0.0.1", 00:09:48.149 "trsvcid": "37082" 00:09:48.149 }, 00:09:48.149 "auth": { 00:09:48.149 "state": "completed", 00:09:48.149 "digest": "sha256", 00:09:48.149 "dhgroup": "ffdhe2048" 00:09:48.149 } 00:09:48.149 } 00:09:48.149 ]' 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.149 20:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.408 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:48.975 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:49.234 20:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:49.493 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.493 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:49.493 { 00:09:49.493 "cntlid": 11, 00:09:49.493 "qid": 0, 00:09:49.493 "state": "enabled", 00:09:49.493 "thread": "nvmf_tgt_poll_group_000", 00:09:49.493 "listen_address": { 00:09:49.493 "trtype": "TCP", 00:09:49.493 "adrfam": "IPv4", 00:09:49.493 "traddr": "10.0.0.2", 00:09:49.493 "trsvcid": "4420" 00:09:49.493 }, 00:09:49.493 "peer_address": { 00:09:49.493 "trtype": "TCP", 00:09:49.493 "adrfam": "IPv4", 00:09:49.493 "traddr": "10.0.0.1", 00:09:49.493 "trsvcid": "37110" 00:09:49.494 }, 00:09:49.494 "auth": { 00:09:49.494 "state": "completed", 00:09:49.494 "digest": "sha256", 00:09:49.494 "dhgroup": "ffdhe2048" 00:09:49.494 } 00:09:49.494 } 00:09:49.494 ]' 00:09:49.494 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:49.752 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:49.752 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:49.752 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:49.752 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:49.752 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.752 20:46:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.752 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.009 20:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:09:50.646 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.646 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:50.646 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.646 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.646 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:50.647 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:50.904 00:09:50.904 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:50.905 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.905 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:51.162 { 00:09:51.162 "cntlid": 13, 00:09:51.162 "qid": 0, 00:09:51.162 "state": "enabled", 00:09:51.162 "thread": "nvmf_tgt_poll_group_000", 00:09:51.162 "listen_address": { 00:09:51.162 "trtype": "TCP", 00:09:51.162 "adrfam": "IPv4", 00:09:51.162 "traddr": "10.0.0.2", 00:09:51.162 "trsvcid": "4420" 00:09:51.162 }, 00:09:51.162 "peer_address": { 00:09:51.162 "trtype": "TCP", 00:09:51.162 "adrfam": "IPv4", 00:09:51.162 "traddr": "10.0.0.1", 00:09:51.162 "trsvcid": "37142" 00:09:51.162 }, 00:09:51.162 "auth": { 00:09:51.162 "state": "completed", 00:09:51.162 "digest": "sha256", 00:09:51.162 "dhgroup": "ffdhe2048" 00:09:51.162 } 00:09:51.162 } 00:09:51.162 ]' 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:51.162 20:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:51.162 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:51.162 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:51.162 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.162 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.162 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.419 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 
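The trace above repeats one cycle per key: the host-side bdev layer is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with that key (plus its controller key for bidirectional authentication), the controller is attached with the matching pair, and it is detached again before the next combination. A minimal bash sketch of that cycle, assuming the target and the host RPC server on /var/tmp/host.sock are already running and that keyring entries key0..key3 / ckey0..ckey3 were loaded earlier in the script (the NQN, UUID and addresses are the ones used by this test; keyid is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e
  keyid=1   # illustrative; the test loops over 0..3

  # Limit the initiator to one digest/dhgroup combination for this pass.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Allow the host on the subsystem, bound to key$keyid; ckey$keyid enables
  # bidirectional (controller) authentication.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Attach from the host side with the matching key pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Detach before moving on to the next key/dhgroup combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0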
00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:51.984 20:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:52.242 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:52.500 00:09:52.500 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:52.500 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.500 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:52.757 { 00:09:52.757 "cntlid": 15, 00:09:52.757 "qid": 0, 
00:09:52.757 "state": "enabled", 00:09:52.757 "thread": "nvmf_tgt_poll_group_000", 00:09:52.757 "listen_address": { 00:09:52.757 "trtype": "TCP", 00:09:52.757 "adrfam": "IPv4", 00:09:52.757 "traddr": "10.0.0.2", 00:09:52.757 "trsvcid": "4420" 00:09:52.757 }, 00:09:52.757 "peer_address": { 00:09:52.757 "trtype": "TCP", 00:09:52.757 "adrfam": "IPv4", 00:09:52.757 "traddr": "10.0.0.1", 00:09:52.757 "trsvcid": "37174" 00:09:52.757 }, 00:09:52.757 "auth": { 00:09:52.757 "state": "completed", 00:09:52.757 "digest": "sha256", 00:09:52.757 "dhgroup": "ffdhe2048" 00:09:52.757 } 00:09:52.757 } 00:09:52.757 ]' 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.757 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.015 20:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:53.581 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.839 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:54.100 00:09:54.100 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:54.100 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:54.100 20:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:54.359 { 00:09:54.359 "cntlid": 17, 00:09:54.359 "qid": 0, 00:09:54.359 "state": "enabled", 00:09:54.359 "thread": "nvmf_tgt_poll_group_000", 00:09:54.359 "listen_address": { 00:09:54.359 "trtype": "TCP", 00:09:54.359 "adrfam": "IPv4", 00:09:54.359 "traddr": "10.0.0.2", 00:09:54.359 "trsvcid": "4420" 00:09:54.359 }, 00:09:54.359 "peer_address": { 00:09:54.359 "trtype": "TCP", 00:09:54.359 "adrfam": "IPv4", 00:09:54.359 "traddr": "10.0.0.1", 00:09:54.359 "trsvcid": "37196" 00:09:54.359 }, 00:09:54.359 "auth": { 00:09:54.359 "state": "completed", 00:09:54.359 "digest": "sha256", 00:09:54.359 "dhgroup": "ffdhe3072" 00:09:54.359 } 00:09:54.359 } 00:09:54.359 ]' 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.359 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.618 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:55.186 20:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.445 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.445 
20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.704 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.704 20:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:55.964 { 00:09:55.964 "cntlid": 19, 00:09:55.964 "qid": 0, 00:09:55.964 "state": "enabled", 00:09:55.964 "thread": "nvmf_tgt_poll_group_000", 00:09:55.964 "listen_address": { 00:09:55.964 "trtype": "TCP", 00:09:55.964 "adrfam": "IPv4", 00:09:55.964 "traddr": "10.0.0.2", 00:09:55.964 "trsvcid": "4420" 00:09:55.964 }, 00:09:55.964 "peer_address": { 00:09:55.964 "trtype": "TCP", 00:09:55.964 "adrfam": "IPv4", 00:09:55.964 "traddr": "10.0.0.1", 00:09:55.964 "trsvcid": "47944" 00:09:55.964 }, 00:09:55.964 "auth": { 00:09:55.964 "state": "completed", 00:09:55.964 "digest": "sha256", 00:09:55.964 "dhgroup": "ffdhe3072" 00:09:55.964 } 00:09:55.964 } 00:09:55.964 ]' 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.964 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.223 20:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
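Each attach is verified through the output printed above: bdev_nvme_get_controllers must report the controller as nvme0, and nvmf_subsystem_get_qpairs must return an auth object whose digest, dhgroup and state match what was negotiated (ffdhe3072 in this part of the trace). A condensed sketch of those assertions, assuming the same socket layout as in the previous sketch and jq on the PATH:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  dhgroup=ffdhe3072   # the group configured for this pass of the loop

  # The host-side controller must exist under the expected name.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The target-side qpair must report a completed DH-HMAC-CHAP exchange with
  # the negotiated digest and dhgroup.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]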
00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.790 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.357 00:09:57.358 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:57.358 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:57.358 20:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:57.358 { 00:09:57.358 "cntlid": 21, 00:09:57.358 "qid": 0, 00:09:57.358 "state": "enabled", 00:09:57.358 "thread": "nvmf_tgt_poll_group_000", 00:09:57.358 "listen_address": { 00:09:57.358 "trtype": "TCP", 00:09:57.358 "adrfam": "IPv4", 00:09:57.358 "traddr": "10.0.0.2", 00:09:57.358 "trsvcid": "4420" 00:09:57.358 }, 00:09:57.358 "peer_address": { 00:09:57.358 "trtype": "TCP", 00:09:57.358 "adrfam": "IPv4", 00:09:57.358 "traddr": "10.0.0.1", 00:09:57.358 "trsvcid": "47978" 00:09:57.358 }, 00:09:57.358 "auth": { 00:09:57.358 "state": "completed", 00:09:57.358 "digest": "sha256", 00:09:57.358 "dhgroup": "ffdhe3072" 00:09:57.358 } 00:09:57.358 } 00:09:57.358 ]' 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.358 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:57.615 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:57.615 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:57.615 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.615 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.615 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.615 20:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:58.183 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:09:58.441 20:46:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:58.441 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:58.700 00:09:58.700 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:58.700 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.700 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:58.958 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.958 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:58.959 { 00:09:58.959 "cntlid": 23, 00:09:58.959 "qid": 0, 00:09:58.959 "state": "enabled", 00:09:58.959 "thread": "nvmf_tgt_poll_group_000", 00:09:58.959 "listen_address": { 00:09:58.959 "trtype": "TCP", 00:09:58.959 "adrfam": "IPv4", 00:09:58.959 "traddr": "10.0.0.2", 00:09:58.959 "trsvcid": "4420" 00:09:58.959 }, 00:09:58.959 "peer_address": { 00:09:58.959 "trtype": "TCP", 00:09:58.959 "adrfam": "IPv4", 00:09:58.959 "traddr": "10.0.0.1", 00:09:58.959 "trsvcid": "48022" 00:09:58.959 }, 00:09:58.959 "auth": { 00:09:58.959 "state": "completed", 00:09:58.959 "digest": "sha256", 00:09:58.959 "dhgroup": "ffdhe3072" 00:09:58.959 } 00:09:58.959 } 00:09:58.959 ]' 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.959 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:09:59.218 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:59.218 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:59.218 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.218 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.218 20:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.218 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:59.786 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.045 20:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.046 20:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.046 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.046 20:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.304 00:10:00.304 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:00.304 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:00.304 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:00.564 { 00:10:00.564 "cntlid": 25, 00:10:00.564 "qid": 0, 00:10:00.564 "state": "enabled", 00:10:00.564 "thread": "nvmf_tgt_poll_group_000", 00:10:00.564 "listen_address": { 00:10:00.564 "trtype": "TCP", 00:10:00.564 "adrfam": "IPv4", 00:10:00.564 "traddr": "10.0.0.2", 00:10:00.564 "trsvcid": "4420" 00:10:00.564 }, 00:10:00.564 "peer_address": { 00:10:00.564 "trtype": "TCP", 00:10:00.564 "adrfam": "IPv4", 00:10:00.564 "traddr": "10.0.0.1", 00:10:00.564 "trsvcid": "48034" 00:10:00.564 }, 00:10:00.564 "auth": { 00:10:00.564 "state": "completed", 00:10:00.564 "digest": "sha256", 00:10:00.564 "dhgroup": "ffdhe4096" 00:10:00.564 } 00:10:00.564 } 00:10:00.564 ]' 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:00.564 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:00.823 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.823 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.823 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.823 20:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret 
DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:01.423 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.682 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.940 00:10:01.940 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:01.940 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.940 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
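After the SPDK-host checks, each pass also exercises the same credentials through the kernel initiator: nvme connect is invoked with the host and controller secrets in DHHC-1 format, the controller is disconnected again, and the host is removed from the subsystem before the next key. A sketch of that leg, assuming an nvme-cli build with DH-HMAC-CHAP support and the nvme-tcp module loaded; the secret blobs are the ones generated by the test, truncated here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e

  # Connect with one I/O queue, authenticating with the host secret and, for
  # bidirectional authentication, the controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e \
      --dhchap-secret 'DHHC-1:00:...' \
      --dhchap-ctrl-secret 'DHHC-1:03:...'

  # Drop the kernel controller, then revoke the host before the next pass.
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"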
00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:02.199 { 00:10:02.199 "cntlid": 27, 00:10:02.199 "qid": 0, 00:10:02.199 "state": "enabled", 00:10:02.199 "thread": "nvmf_tgt_poll_group_000", 00:10:02.199 "listen_address": { 00:10:02.199 "trtype": "TCP", 00:10:02.199 "adrfam": "IPv4", 00:10:02.199 "traddr": "10.0.0.2", 00:10:02.199 "trsvcid": "4420" 00:10:02.199 }, 00:10:02.199 "peer_address": { 00:10:02.199 "trtype": "TCP", 00:10:02.199 "adrfam": "IPv4", 00:10:02.199 "traddr": "10.0.0.1", 00:10:02.199 "trsvcid": "48066" 00:10:02.199 }, 00:10:02.199 "auth": { 00:10:02.199 "state": "completed", 00:10:02.199 "digest": "sha256", 00:10:02.199 "dhgroup": "ffdhe4096" 00:10:02.199 } 00:10:02.199 } 00:10:02.199 ]' 00:10:02.199 20:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.199 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.458 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:03.026 20:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.284 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.543 00:10:03.543 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:03.543 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:03.543 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:03.803 { 00:10:03.803 "cntlid": 29, 00:10:03.803 "qid": 0, 00:10:03.803 "state": "enabled", 00:10:03.803 "thread": "nvmf_tgt_poll_group_000", 00:10:03.803 "listen_address": { 00:10:03.803 "trtype": "TCP", 00:10:03.803 "adrfam": "IPv4", 00:10:03.803 "traddr": "10.0.0.2", 00:10:03.803 "trsvcid": "4420" 00:10:03.803 }, 00:10:03.803 "peer_address": { 00:10:03.803 "trtype": "TCP", 00:10:03.803 "adrfam": "IPv4", 00:10:03.803 "traddr": "10.0.0.1", 00:10:03.803 "trsvcid": "48094" 00:10:03.803 }, 00:10:03.803 "auth": { 00:10:03.803 "state": "completed", 00:10:03.803 "digest": "sha256", 00:10:03.803 "dhgroup": 
"ffdhe4096" 00:10:03.803 } 00:10:03.803 } 00:10:03.803 ]' 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.803 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.061 20:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:04.639 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:04.899 20:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:05.158 00:10:05.158 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:05.158 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.158 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:05.418 { 00:10:05.418 "cntlid": 31, 00:10:05.418 "qid": 0, 00:10:05.418 "state": "enabled", 00:10:05.418 "thread": "nvmf_tgt_poll_group_000", 00:10:05.418 "listen_address": { 00:10:05.418 "trtype": "TCP", 00:10:05.418 "adrfam": "IPv4", 00:10:05.418 "traddr": "10.0.0.2", 00:10:05.418 "trsvcid": "4420" 00:10:05.418 }, 00:10:05.418 "peer_address": { 00:10:05.418 "trtype": "TCP", 00:10:05.418 "adrfam": "IPv4", 00:10:05.418 "traddr": "10.0.0.1", 00:10:05.418 "trsvcid": "48128" 00:10:05.418 }, 00:10:05.418 "auth": { 00:10:05.418 "state": "completed", 00:10:05.418 "digest": "sha256", 00:10:05.418 "dhgroup": "ffdhe4096" 00:10:05.418 } 00:10:05.418 } 00:10:05.418 ]' 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:05.418 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:05.677 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.677 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.677 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.677 20:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 
69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:06.246 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.246 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:06.246 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.246 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:06.505 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.072 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.072 20:46:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:07.072 { 00:10:07.072 "cntlid": 33, 00:10:07.072 "qid": 0, 00:10:07.072 "state": "enabled", 00:10:07.072 "thread": "nvmf_tgt_poll_group_000", 00:10:07.072 "listen_address": { 00:10:07.072 "trtype": "TCP", 00:10:07.072 "adrfam": "IPv4", 00:10:07.072 "traddr": "10.0.0.2", 00:10:07.072 "trsvcid": "4420" 00:10:07.072 }, 00:10:07.072 "peer_address": { 00:10:07.072 "trtype": "TCP", 00:10:07.072 "adrfam": "IPv4", 00:10:07.072 "traddr": "10.0.0.1", 00:10:07.072 "trsvcid": "44652" 00:10:07.072 }, 00:10:07.072 "auth": { 00:10:07.072 "state": "completed", 00:10:07.072 "digest": "sha256", 00:10:07.072 "dhgroup": "ffdhe6144" 00:10:07.072 } 00:10:07.072 } 00:10:07.072 ]' 00:10:07.072 20:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.332 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.591 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:08.176 
20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:08.176 20:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.176 20:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.435 20:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.435 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:08.435 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:08.695 00:10:08.695 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:08.695 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:08.695 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:08.966 { 00:10:08.966 "cntlid": 35, 00:10:08.966 "qid": 0, 00:10:08.966 "state": "enabled", 00:10:08.966 "thread": "nvmf_tgt_poll_group_000", 00:10:08.966 "listen_address": { 00:10:08.966 "trtype": "TCP", 00:10:08.966 "adrfam": "IPv4", 00:10:08.966 "traddr": "10.0.0.2", 00:10:08.966 "trsvcid": "4420" 00:10:08.966 }, 00:10:08.966 "peer_address": { 00:10:08.966 "trtype": "TCP", 00:10:08.966 
"adrfam": "IPv4", 00:10:08.966 "traddr": "10.0.0.1", 00:10:08.966 "trsvcid": "44676" 00:10:08.966 }, 00:10:08.966 "auth": { 00:10:08.966 "state": "completed", 00:10:08.966 "digest": "sha256", 00:10:08.966 "dhgroup": "ffdhe6144" 00:10:08.966 } 00:10:08.966 } 00:10:08.966 ]' 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.966 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.225 20:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:09.791 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:10.050 20:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:10.308 00:10:10.308 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:10.308 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:10.308 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.567 { 00:10:10.567 "cntlid": 37, 00:10:10.567 "qid": 0, 00:10:10.567 "state": "enabled", 00:10:10.567 "thread": "nvmf_tgt_poll_group_000", 00:10:10.567 "listen_address": { 00:10:10.567 "trtype": "TCP", 00:10:10.567 "adrfam": "IPv4", 00:10:10.567 "traddr": "10.0.0.2", 00:10:10.567 "trsvcid": "4420" 00:10:10.567 }, 00:10:10.567 "peer_address": { 00:10:10.567 "trtype": "TCP", 00:10:10.567 "adrfam": "IPv4", 00:10:10.567 "traddr": "10.0.0.1", 00:10:10.567 "trsvcid": "44706" 00:10:10.567 }, 00:10:10.567 "auth": { 00:10:10.567 "state": "completed", 00:10:10.567 "digest": "sha256", 00:10:10.567 "dhgroup": "ffdhe6144" 00:10:10.567 } 00:10:10.567 } 00:10:10.567 ]' 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.567 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.826 20:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:11.392 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:11.682 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:11.970 00:10:11.970 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
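A minimal standalone sketch of the verification step performed at this point (the rpc.py path and sockets are the ones used in this run; the expected values are the ones for this iteration, sha256/ffdhe6144; the helper script itself is hypothetical, not part of target/auth.sh):

#!/usr/bin/env bash
# Confirm the host attached a controller named nvme0, then confirm the target
# reports a completed DH-HMAC-CHAP exchange with the expected digest/dhgroup.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
subnqn="nqn.2024-03.io.spdk:cnode0"

name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]    || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe6144 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]] || exit 1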
00:10:11.970 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.970 20:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.229 { 00:10:12.229 "cntlid": 39, 00:10:12.229 "qid": 0, 00:10:12.229 "state": "enabled", 00:10:12.229 "thread": "nvmf_tgt_poll_group_000", 00:10:12.229 "listen_address": { 00:10:12.229 "trtype": "TCP", 00:10:12.229 "adrfam": "IPv4", 00:10:12.229 "traddr": "10.0.0.2", 00:10:12.229 "trsvcid": "4420" 00:10:12.229 }, 00:10:12.229 "peer_address": { 00:10:12.229 "trtype": "TCP", 00:10:12.229 "adrfam": "IPv4", 00:10:12.229 "traddr": "10.0.0.1", 00:10:12.229 "trsvcid": "44722" 00:10:12.229 }, 00:10:12.229 "auth": { 00:10:12.229 "state": "completed", 00:10:12.229 "digest": "sha256", 00:10:12.229 "dhgroup": "ffdhe6144" 00:10:12.229 } 00:10:12.229 } 00:10:12.229 ]' 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:12.229 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.487 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.487 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.487 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.487 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.053 20:46:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:13.053 20:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:13.310 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:13.875 00:10:13.875 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:13.875 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.875 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:14.134 { 00:10:14.134 "cntlid": 41, 00:10:14.134 "qid": 0, 00:10:14.134 "state": "enabled", 00:10:14.134 "thread": "nvmf_tgt_poll_group_000", 00:10:14.134 "listen_address": { 00:10:14.134 "trtype": 
"TCP", 00:10:14.134 "adrfam": "IPv4", 00:10:14.134 "traddr": "10.0.0.2", 00:10:14.134 "trsvcid": "4420" 00:10:14.134 }, 00:10:14.134 "peer_address": { 00:10:14.134 "trtype": "TCP", 00:10:14.134 "adrfam": "IPv4", 00:10:14.134 "traddr": "10.0.0.1", 00:10:14.134 "trsvcid": "44756" 00:10:14.134 }, 00:10:14.134 "auth": { 00:10:14.134 "state": "completed", 00:10:14.134 "digest": "sha256", 00:10:14.134 "dhgroup": "ffdhe8192" 00:10:14.134 } 00:10:14.134 } 00:10:14.134 ]' 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.134 20:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.391 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:14.957 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:15.215 20:46:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.215 20:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.782 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:15.782 { 00:10:15.782 "cntlid": 43, 00:10:15.782 "qid": 0, 00:10:15.782 "state": "enabled", 00:10:15.782 "thread": "nvmf_tgt_poll_group_000", 00:10:15.782 "listen_address": { 00:10:15.782 "trtype": "TCP", 00:10:15.782 "adrfam": "IPv4", 00:10:15.782 "traddr": "10.0.0.2", 00:10:15.782 "trsvcid": "4420" 00:10:15.782 }, 00:10:15.782 "peer_address": { 00:10:15.782 "trtype": "TCP", 00:10:15.782 "adrfam": "IPv4", 00:10:15.782 "traddr": "10.0.0.1", 00:10:15.782 "trsvcid": "39644" 00:10:15.782 }, 00:10:15.782 "auth": { 00:10:15.782 "state": "completed", 00:10:15.782 "digest": "sha256", 00:10:15.782 "dhgroup": "ffdhe8192" 00:10:15.782 } 00:10:15.782 } 00:10:15.782 ]' 00:10:15.782 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.041 20:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:16.608 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.867 20:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.435 00:10:17.435 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:17.435 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:17.435 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.693 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.693 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.693 20:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.693 20:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:17.694 { 00:10:17.694 "cntlid": 45, 00:10:17.694 "qid": 0, 00:10:17.694 "state": "enabled", 00:10:17.694 "thread": "nvmf_tgt_poll_group_000", 00:10:17.694 "listen_address": { 00:10:17.694 "trtype": "TCP", 00:10:17.694 "adrfam": "IPv4", 00:10:17.694 "traddr": "10.0.0.2", 00:10:17.694 "trsvcid": "4420" 00:10:17.694 }, 00:10:17.694 "peer_address": { 00:10:17.694 "trtype": "TCP", 00:10:17.694 "adrfam": "IPv4", 00:10:17.694 "traddr": "10.0.0.1", 00:10:17.694 "trsvcid": "39680" 00:10:17.694 }, 00:10:17.694 "auth": { 00:10:17.694 "state": "completed", 00:10:17.694 "digest": "sha256", 00:10:17.694 "dhgroup": "ffdhe8192" 00:10:17.694 } 00:10:17.694 } 00:10:17.694 ]' 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.694 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.952 20:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:18.520 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:18.779 20:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:19.346 00:10:19.346 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:19.346 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:19.346 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.604 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.604 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.604 20:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.604 20:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.604 20:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.604 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:10:19.604 { 00:10:19.604 "cntlid": 47, 00:10:19.604 "qid": 0, 00:10:19.604 "state": "enabled", 00:10:19.604 "thread": "nvmf_tgt_poll_group_000", 00:10:19.604 "listen_address": { 00:10:19.604 "trtype": "TCP", 00:10:19.604 "adrfam": "IPv4", 00:10:19.604 "traddr": "10.0.0.2", 00:10:19.604 "trsvcid": "4420" 00:10:19.604 }, 00:10:19.604 "peer_address": { 00:10:19.604 "trtype": "TCP", 00:10:19.604 "adrfam": "IPv4", 00:10:19.604 "traddr": "10.0.0.1", 00:10:19.604 "trsvcid": "39706" 00:10:19.604 }, 00:10:19.604 "auth": { 00:10:19.604 "state": "completed", 00:10:19.604 "digest": "sha256", 00:10:19.604 "dhgroup": "ffdhe8192" 00:10:19.604 } 00:10:19.605 } 00:10:19.605 ]' 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.605 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.871 20:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:20.439 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
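Each digest/dhgroup/key combination in this trace runs the same round: restrict the host's DH-HMAC-CHAP options, allow the key on the target, attach a controller, check the qpair's auth block, then tear down. A rough sketch of one such pass (RPC commands and NQNs copied from this run; the kernel nvme connect/disconnect step shown elsewhere in the trace and the surrounding loops are omitted):

#!/usr/bin/env bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e"
digest=sha384 dhgroup=null keyid=0

# Pin the host to one digest/dhgroup, register the key pair on the target,
# and attach a controller over TCP using that key pair.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The target should report a completed authentication with the same parameters.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e --arg d "$digest" --arg g "$dhgroup" \
    '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'

# Tear down so the next combination starts clean.
hostrpc bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"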
00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.697 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.698 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.967 00:10:20.967 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.967 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.967 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:21.227 { 00:10:21.227 "cntlid": 49, 00:10:21.227 "qid": 0, 00:10:21.227 "state": "enabled", 00:10:21.227 "thread": "nvmf_tgt_poll_group_000", 00:10:21.227 "listen_address": { 00:10:21.227 "trtype": "TCP", 00:10:21.227 "adrfam": "IPv4", 00:10:21.227 "traddr": "10.0.0.2", 00:10:21.227 "trsvcid": "4420" 00:10:21.227 }, 00:10:21.227 "peer_address": { 00:10:21.227 "trtype": "TCP", 00:10:21.227 "adrfam": "IPv4", 00:10:21.227 "traddr": "10.0.0.1", 00:10:21.227 "trsvcid": "39728" 00:10:21.227 }, 00:10:21.227 "auth": { 00:10:21.227 "state": "completed", 00:10:21.227 "digest": "sha384", 00:10:21.227 "dhgroup": "null" 00:10:21.227 } 00:10:21.227 } 00:10:21.227 ]' 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:21.227 20:46:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:21.227 20:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:21.227 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.227 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.227 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.487 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:22.060 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:22.318 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:22.318 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:22.318 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:22.318 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:22.318 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:22.318 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.319 20:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.319 20:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.319 20:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.319 20:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.319 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.319 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.577 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.577 { 00:10:22.577 "cntlid": 51, 00:10:22.577 "qid": 0, 00:10:22.577 "state": "enabled", 00:10:22.577 "thread": "nvmf_tgt_poll_group_000", 00:10:22.577 "listen_address": { 00:10:22.577 "trtype": "TCP", 00:10:22.577 "adrfam": "IPv4", 00:10:22.577 "traddr": "10.0.0.2", 00:10:22.577 "trsvcid": "4420" 00:10:22.577 }, 00:10:22.577 "peer_address": { 00:10:22.577 "trtype": "TCP", 00:10:22.577 "adrfam": "IPv4", 00:10:22.577 "traddr": "10.0.0.1", 00:10:22.577 "trsvcid": "39758" 00:10:22.577 }, 00:10:22.577 "auth": { 00:10:22.577 "state": "completed", 00:10:22.577 "digest": "sha384", 00:10:22.577 "dhgroup": "null" 00:10:22.577 } 00:10:22.577 } 00:10:22.577 ]' 00:10:22.577 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.836 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.095 20:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.662 20:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.921 20:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.921 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.921 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.921 00:10:24.181 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.181 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.181 20:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.181 { 00:10:24.181 "cntlid": 53, 00:10:24.181 "qid": 0, 00:10:24.181 "state": "enabled", 00:10:24.181 "thread": "nvmf_tgt_poll_group_000", 00:10:24.181 "listen_address": { 00:10:24.181 "trtype": "TCP", 00:10:24.181 "adrfam": "IPv4", 00:10:24.181 "traddr": "10.0.0.2", 00:10:24.181 "trsvcid": "4420" 00:10:24.181 }, 00:10:24.181 "peer_address": { 00:10:24.181 "trtype": "TCP", 00:10:24.181 "adrfam": "IPv4", 00:10:24.181 "traddr": "10.0.0.1", 00:10:24.181 "trsvcid": "39784" 00:10:24.181 }, 00:10:24.181 "auth": { 00:10:24.181 "state": "completed", 00:10:24.181 "digest": "sha384", 00:10:24.181 "dhgroup": "null" 00:10:24.181 } 00:10:24.181 } 00:10:24.181 ]' 00:10:24.181 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.440 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.699 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:25.268 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:25.269 20:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:25.269 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:25.528 00:10:25.528 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:25.528 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.528 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:25.787 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.787 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.787 20:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.787 20:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.788 20:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.788 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.788 { 00:10:25.788 "cntlid": 55, 00:10:25.788 "qid": 0, 00:10:25.788 "state": "enabled", 00:10:25.788 "thread": "nvmf_tgt_poll_group_000", 00:10:25.788 "listen_address": { 00:10:25.788 "trtype": "TCP", 00:10:25.788 "adrfam": "IPv4", 00:10:25.788 "traddr": "10.0.0.2", 00:10:25.788 "trsvcid": "4420" 00:10:25.788 }, 00:10:25.788 "peer_address": { 00:10:25.788 "trtype": "TCP", 00:10:25.788 "adrfam": "IPv4", 00:10:25.788 "traddr": "10.0.0.1", 00:10:25.788 "trsvcid": "43142" 00:10:25.788 }, 00:10:25.788 "auth": { 00:10:25.788 "state": "completed", 00:10:25.788 "digest": "sha384", 00:10:25.788 "dhgroup": "null" 00:10:25.788 } 00:10:25.788 } 00:10:25.788 ]' 00:10:25.788 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.788 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.788 20:46:47 
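(The key3 pass above is the only one issued without a controller key: there is no ckey for index 3, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@37 contributes nothing and both nvmf_subsystem_add_host and bdev_nvme_attach_controller run with --dhchap-key key3 alone, i.e. without bidirectional authentication. A small self-contained illustration of that expansion follows; the ckeys values are stand-ins and only the empty slot at index 3 matters.)
  # Illustration of bash's ':+' expansion as used by the test, not the test script itself.
  ckeys=("c0" "c1" "c2")                        # assumed shape: indices 0-2 populated, index 3 unset
  for keyid in 1 3; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid: ${ckey[*]:-<no controller key argument>}"
  done
  # prints: key1: --dhchap-ctrlr-key ckey1
  #         key3: <no controller key argument>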
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:26.103 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:26.103 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.103 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.103 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.103 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.103 20:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:26.674 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.933 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.192 00:10:27.192 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.192 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.192 20:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:27.451 { 00:10:27.451 "cntlid": 57, 00:10:27.451 "qid": 0, 00:10:27.451 "state": "enabled", 00:10:27.451 "thread": "nvmf_tgt_poll_group_000", 00:10:27.451 "listen_address": { 00:10:27.451 "trtype": "TCP", 00:10:27.451 "adrfam": "IPv4", 00:10:27.451 "traddr": "10.0.0.2", 00:10:27.451 "trsvcid": "4420" 00:10:27.451 }, 00:10:27.451 "peer_address": { 00:10:27.451 "trtype": "TCP", 00:10:27.451 "adrfam": "IPv4", 00:10:27.451 "traddr": "10.0.0.1", 00:10:27.451 "trsvcid": "43176" 00:10:27.451 }, 00:10:27.451 "auth": { 00:10:27.451 "state": "completed", 00:10:27.451 "digest": "sha384", 00:10:27.451 "dhgroup": "ffdhe2048" 00:10:27.451 } 00:10:27.451 } 00:10:27.451 ]' 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.451 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.710 20:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret 
DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:28.277 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.536 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.794 00:10:28.794 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:28.794 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.795 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.053 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
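(The ffdhe2048 passes now underway repeat exactly the cycle used for the "null" group. For orientation, one pass condensed into its main commands, paraphrased from the trace; $HOSTNQN, $HOSTID and $KEY1/$CKEY1 stand in for the literal nqn.2014-08.org.nvmexpress:uuid value and DHHC-1 secrets printed in the log, and rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py.)
  # Condensed paraphrase of one connect_authenticate pass (sha384 / ffdhe2048 / key1).
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048            # host-side auth policy
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1                     # target: allow host, pin its keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1                     # SPDK initiator: connect and authenticate
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0       # then assert digest/dhgroup/state
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
      --hostid "$HOSTID" --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"   # kernel initiator path
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"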
00:10:29.053 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.053 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.053 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.053 20:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.053 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:29.053 { 00:10:29.053 "cntlid": 59, 00:10:29.053 "qid": 0, 00:10:29.053 "state": "enabled", 00:10:29.053 "thread": "nvmf_tgt_poll_group_000", 00:10:29.053 "listen_address": { 00:10:29.053 "trtype": "TCP", 00:10:29.053 "adrfam": "IPv4", 00:10:29.053 "traddr": "10.0.0.2", 00:10:29.053 "trsvcid": "4420" 00:10:29.053 }, 00:10:29.053 "peer_address": { 00:10:29.053 "trtype": "TCP", 00:10:29.053 "adrfam": "IPv4", 00:10:29.053 "traddr": "10.0.0.1", 00:10:29.054 "trsvcid": "43202" 00:10:29.054 }, 00:10:29.054 "auth": { 00:10:29.054 "state": "completed", 00:10:29.054 "digest": "sha384", 00:10:29.054 "dhgroup": "ffdhe2048" 00:10:29.054 } 00:10:29.054 } 00:10:29.054 ]' 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.054 20:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.311 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:29.930 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.189 20:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.446 00:10:30.446 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.446 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.446 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.704 { 00:10:30.704 "cntlid": 61, 00:10:30.704 "qid": 0, 00:10:30.704 "state": "enabled", 00:10:30.704 "thread": "nvmf_tgt_poll_group_000", 00:10:30.704 "listen_address": { 00:10:30.704 "trtype": "TCP", 00:10:30.704 "adrfam": "IPv4", 00:10:30.704 "traddr": "10.0.0.2", 00:10:30.704 "trsvcid": "4420" 00:10:30.704 }, 00:10:30.704 "peer_address": { 00:10:30.704 "trtype": "TCP", 00:10:30.704 "adrfam": "IPv4", 00:10:30.704 "traddr": "10.0.0.1", 00:10:30.704 "trsvcid": "43238" 00:10:30.704 }, 00:10:30.704 "auth": { 00:10:30.704 "state": "completed", 00:10:30.704 "digest": "sha384", 00:10:30.704 "dhgroup": 
"ffdhe2048" 00:10:30.704 } 00:10:30.704 } 00:10:30.704 ]' 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.704 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.962 20:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:31.528 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:31.785 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:32.043 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.043 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.302 20:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.302 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.302 { 00:10:32.302 "cntlid": 63, 00:10:32.302 "qid": 0, 00:10:32.302 "state": "enabled", 00:10:32.302 "thread": "nvmf_tgt_poll_group_000", 00:10:32.302 "listen_address": { 00:10:32.302 "trtype": "TCP", 00:10:32.302 "adrfam": "IPv4", 00:10:32.302 "traddr": "10.0.0.2", 00:10:32.302 "trsvcid": "4420" 00:10:32.302 }, 00:10:32.302 "peer_address": { 00:10:32.302 "trtype": "TCP", 00:10:32.302 "adrfam": "IPv4", 00:10:32.302 "traddr": "10.0.0.1", 00:10:32.302 "trsvcid": "43266" 00:10:32.302 }, 00:10:32.302 "auth": { 00:10:32.302 "state": "completed", 00:10:32.302 "digest": "sha384", 00:10:32.302 "dhgroup": "ffdhe2048" 00:10:32.302 } 00:10:32.302 } 00:10:32.302 ]' 00:10:32.302 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.302 20:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:32.302 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.302 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:32.302 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.302 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.302 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.302 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.560 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 
69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:33.126 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:33.127 20:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.385 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.644 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.644 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:33.644 { 00:10:33.644 "cntlid": 65, 00:10:33.644 "qid": 0, 00:10:33.644 "state": "enabled", 00:10:33.644 "thread": "nvmf_tgt_poll_group_000", 00:10:33.644 "listen_address": { 00:10:33.644 "trtype": "TCP", 00:10:33.644 "adrfam": "IPv4", 00:10:33.644 "traddr": "10.0.0.2", 00:10:33.644 "trsvcid": "4420" 00:10:33.644 }, 00:10:33.644 "peer_address": { 00:10:33.644 "trtype": "TCP", 00:10:33.644 "adrfam": "IPv4", 00:10:33.645 "traddr": "10.0.0.1", 00:10:33.645 "trsvcid": "43298" 00:10:33.645 }, 00:10:33.645 "auth": { 00:10:33.645 "state": "completed", 00:10:33.645 "digest": "sha384", 00:10:33.645 "dhgroup": "ffdhe3072" 00:10:33.645 } 00:10:33.645 } 00:10:33.645 ]' 00:10:33.645 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.924 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.183 20:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.751 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.010 00:10:35.010 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.010 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.010 20:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.268 { 00:10:35.268 "cntlid": 67, 00:10:35.268 "qid": 0, 00:10:35.268 "state": "enabled", 00:10:35.268 "thread": "nvmf_tgt_poll_group_000", 00:10:35.268 "listen_address": { 00:10:35.268 "trtype": "TCP", 00:10:35.268 "adrfam": "IPv4", 00:10:35.268 "traddr": "10.0.0.2", 00:10:35.268 "trsvcid": "4420" 00:10:35.268 }, 00:10:35.268 "peer_address": { 00:10:35.268 "trtype": 
"TCP", 00:10:35.268 "adrfam": "IPv4", 00:10:35.268 "traddr": "10.0.0.1", 00:10:35.268 "trsvcid": "43342" 00:10:35.268 }, 00:10:35.268 "auth": { 00:10:35.268 "state": "completed", 00:10:35.268 "digest": "sha384", 00:10:35.268 "dhgroup": "ffdhe3072" 00:10:35.268 } 00:10:35.268 } 00:10:35.268 ]' 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:35.268 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.527 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:35.527 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.527 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.527 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.527 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.785 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:36.353 20:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.353 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.612 00:10:36.612 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.612 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.612 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.871 { 00:10:36.871 "cntlid": 69, 00:10:36.871 "qid": 0, 00:10:36.871 "state": "enabled", 00:10:36.871 "thread": "nvmf_tgt_poll_group_000", 00:10:36.871 "listen_address": { 00:10:36.871 "trtype": "TCP", 00:10:36.871 "adrfam": "IPv4", 00:10:36.871 "traddr": "10.0.0.2", 00:10:36.871 "trsvcid": "4420" 00:10:36.871 }, 00:10:36.871 "peer_address": { 00:10:36.871 "trtype": "TCP", 00:10:36.871 "adrfam": "IPv4", 00:10:36.871 "traddr": "10.0.0.1", 00:10:36.871 "trsvcid": "54596" 00:10:36.871 }, 00:10:36.871 "auth": { 00:10:36.871 "state": "completed", 00:10:36.871 "digest": "sha384", 00:10:36.871 "dhgroup": "ffdhe3072" 00:10:36.871 } 00:10:36.871 } 00:10:36.871 ]' 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.871 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.131 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.131 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.131 20:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.131 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.068 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.069 20:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.636 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.636 
20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.636 { 00:10:38.636 "cntlid": 71, 00:10:38.636 "qid": 0, 00:10:38.636 "state": "enabled", 00:10:38.636 "thread": "nvmf_tgt_poll_group_000", 00:10:38.636 "listen_address": { 00:10:38.636 "trtype": "TCP", 00:10:38.636 "adrfam": "IPv4", 00:10:38.636 "traddr": "10.0.0.2", 00:10:38.636 "trsvcid": "4420" 00:10:38.636 }, 00:10:38.636 "peer_address": { 00:10:38.636 "trtype": "TCP", 00:10:38.636 "adrfam": "IPv4", 00:10:38.636 "traddr": "10.0.0.1", 00:10:38.636 "trsvcid": "54622" 00:10:38.636 }, 00:10:38.636 "auth": { 00:10:38.636 "state": "completed", 00:10:38.636 "digest": "sha384", 00:10:38.636 "dhgroup": "ffdhe3072" 00:10:38.636 } 00:10:38.636 } 00:10:38.636 ]' 00:10:38.636 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.894 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.152 20:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.720 20:47:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.720 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.041 00:10:40.300 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.300 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.300 20:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.300 { 00:10:40.300 "cntlid": 73, 00:10:40.300 "qid": 0, 00:10:40.300 "state": "enabled", 00:10:40.300 "thread": "nvmf_tgt_poll_group_000", 00:10:40.300 "listen_address": { 00:10:40.300 "trtype": 
"TCP", 00:10:40.300 "adrfam": "IPv4", 00:10:40.300 "traddr": "10.0.0.2", 00:10:40.300 "trsvcid": "4420" 00:10:40.300 }, 00:10:40.300 "peer_address": { 00:10:40.300 "trtype": "TCP", 00:10:40.300 "adrfam": "IPv4", 00:10:40.300 "traddr": "10.0.0.1", 00:10:40.300 "trsvcid": "54644" 00:10:40.300 }, 00:10:40.300 "auth": { 00:10:40.300 "state": "completed", 00:10:40.300 "digest": "sha384", 00:10:40.300 "dhgroup": "ffdhe4096" 00:10:40.300 } 00:10:40.300 } 00:10:40.300 ]' 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:40.300 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.558 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.558 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.558 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.558 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.558 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.815 20:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:41.382 20:47:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.382 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.639 00:10:41.639 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.639 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.639 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.895 { 00:10:41.895 "cntlid": 75, 00:10:41.895 "qid": 0, 00:10:41.895 "state": "enabled", 00:10:41.895 "thread": "nvmf_tgt_poll_group_000", 00:10:41.895 "listen_address": { 00:10:41.895 "trtype": "TCP", 00:10:41.895 "adrfam": "IPv4", 00:10:41.895 "traddr": "10.0.0.2", 00:10:41.895 "trsvcid": "4420" 00:10:41.895 }, 00:10:41.895 "peer_address": { 00:10:41.895 "trtype": "TCP", 00:10:41.895 "adrfam": "IPv4", 00:10:41.895 "traddr": "10.0.0.1", 00:10:41.895 "trsvcid": "54656" 00:10:41.895 }, 00:10:41.895 "auth": { 00:10:41.895 "state": "completed", 00:10:41.895 "digest": "sha384", 00:10:41.895 "dhgroup": "ffdhe4096" 00:10:41.895 } 00:10:41.895 } 00:10:41.895 ]' 00:10:41.895 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.896 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.896 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.153 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:42.153 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.153 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:10:42.153 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.153 20:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.456 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:42.715 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.715 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:42.715 20:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.716 20:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.716 20:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.716 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.716 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:42.716 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.974 20:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.232 00:10:43.232 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.232 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.232 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.489 { 00:10:43.489 "cntlid": 77, 00:10:43.489 "qid": 0, 00:10:43.489 "state": "enabled", 00:10:43.489 "thread": "nvmf_tgt_poll_group_000", 00:10:43.489 "listen_address": { 00:10:43.489 "trtype": "TCP", 00:10:43.489 "adrfam": "IPv4", 00:10:43.489 "traddr": "10.0.0.2", 00:10:43.489 "trsvcid": "4420" 00:10:43.489 }, 00:10:43.489 "peer_address": { 00:10:43.489 "trtype": "TCP", 00:10:43.489 "adrfam": "IPv4", 00:10:43.489 "traddr": "10.0.0.1", 00:10:43.489 "trsvcid": "54668" 00:10:43.489 }, 00:10:43.489 "auth": { 00:10:43.489 "state": "completed", 00:10:43.489 "digest": "sha384", 00:10:43.489 "dhgroup": "ffdhe4096" 00:10:43.489 } 00:10:43.489 } 00:10:43.489 ]' 00:10:43.489 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.746 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.008 20:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:44.577 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:44.835 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:10:44.835 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.835 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.836 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:45.094 00:10:45.094 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.094 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.094 20:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.094 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:10:45.352 { 00:10:45.352 "cntlid": 79, 00:10:45.352 "qid": 0, 00:10:45.352 "state": "enabled", 00:10:45.352 "thread": "nvmf_tgt_poll_group_000", 00:10:45.352 "listen_address": { 00:10:45.352 "trtype": "TCP", 00:10:45.352 "adrfam": "IPv4", 00:10:45.352 "traddr": "10.0.0.2", 00:10:45.352 "trsvcid": "4420" 00:10:45.352 }, 00:10:45.352 "peer_address": { 00:10:45.352 "trtype": "TCP", 00:10:45.352 "adrfam": "IPv4", 00:10:45.352 "traddr": "10.0.0.1", 00:10:45.352 "trsvcid": "54686" 00:10:45.352 }, 00:10:45.352 "auth": { 00:10:45.352 "state": "completed", 00:10:45.352 "digest": "sha384", 00:10:45.352 "dhgroup": "ffdhe4096" 00:10:45.352 } 00:10:45.352 } 00:10:45.352 ]' 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.352 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.611 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:46.179 20:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:46.437 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:10:46.437 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.437 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:10:46.437 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.438 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.695 00:10:46.695 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.695 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.695 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.952 { 00:10:46.952 "cntlid": 81, 00:10:46.952 "qid": 0, 00:10:46.952 "state": "enabled", 00:10:46.952 "thread": "nvmf_tgt_poll_group_000", 00:10:46.952 "listen_address": { 00:10:46.952 "trtype": "TCP", 00:10:46.952 "adrfam": "IPv4", 00:10:46.952 "traddr": "10.0.0.2", 00:10:46.952 "trsvcid": "4420" 00:10:46.952 }, 00:10:46.952 "peer_address": { 00:10:46.952 "trtype": "TCP", 00:10:46.952 "adrfam": "IPv4", 00:10:46.952 "traddr": "10.0.0.1", 00:10:46.952 "trsvcid": "54204" 00:10:46.952 }, 00:10:46.952 "auth": { 00:10:46.952 "state": "completed", 00:10:46.952 "digest": "sha384", 00:10:46.952 "dhgroup": "ffdhe6144" 00:10:46.952 } 00:10:46.952 } 00:10:46.952 ]' 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.952 20:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.210 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:47.789 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.047 20:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.310 00:10:48.310 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.310 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.310 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.569 { 00:10:48.569 "cntlid": 83, 00:10:48.569 "qid": 0, 00:10:48.569 "state": "enabled", 00:10:48.569 "thread": "nvmf_tgt_poll_group_000", 00:10:48.569 "listen_address": { 00:10:48.569 "trtype": "TCP", 00:10:48.569 "adrfam": "IPv4", 00:10:48.569 "traddr": "10.0.0.2", 00:10:48.569 "trsvcid": "4420" 00:10:48.569 }, 00:10:48.569 "peer_address": { 00:10:48.569 "trtype": "TCP", 00:10:48.569 "adrfam": "IPv4", 00:10:48.569 "traddr": "10.0.0.1", 00:10:48.569 "trsvcid": "54228" 00:10:48.569 }, 00:10:48.569 "auth": { 00:10:48.569 "state": "completed", 00:10:48.569 "digest": "sha384", 00:10:48.569 "dhgroup": "ffdhe6144" 00:10:48.569 } 00:10:48.569 } 00:10:48.569 ]' 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.569 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.828 20:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:49.394 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:10:49.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.394 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:49.394 20:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.394 20:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.395 20:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.395 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.395 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:49.395 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.654 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.913 00:10:50.172 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.172 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.172 20:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.172 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.172 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.172 20:47:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.172 20:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.172 20:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.172 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.172 { 00:10:50.172 "cntlid": 85, 00:10:50.172 "qid": 0, 00:10:50.172 "state": "enabled", 00:10:50.172 "thread": "nvmf_tgt_poll_group_000", 00:10:50.172 "listen_address": { 00:10:50.172 "trtype": "TCP", 00:10:50.172 "adrfam": "IPv4", 00:10:50.172 "traddr": "10.0.0.2", 00:10:50.172 "trsvcid": "4420" 00:10:50.172 }, 00:10:50.172 "peer_address": { 00:10:50.172 "trtype": "TCP", 00:10:50.172 "adrfam": "IPv4", 00:10:50.172 "traddr": "10.0.0.1", 00:10:50.173 "trsvcid": "54264" 00:10:50.173 }, 00:10:50.173 "auth": { 00:10:50.173 "state": "completed", 00:10:50.173 "digest": "sha384", 00:10:50.173 "dhgroup": "ffdhe6144" 00:10:50.173 } 00:10:50.173 } 00:10:50.173 ]' 00:10:50.173 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.431 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.690 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:51.270 20:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:51.270 20:47:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.270 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.836 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.836 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.836 { 00:10:51.836 "cntlid": 87, 00:10:51.836 "qid": 0, 00:10:51.836 "state": "enabled", 00:10:51.836 "thread": "nvmf_tgt_poll_group_000", 00:10:51.836 "listen_address": { 00:10:51.836 "trtype": "TCP", 00:10:51.836 "adrfam": "IPv4", 00:10:51.837 "traddr": "10.0.0.2", 00:10:51.837 "trsvcid": "4420" 00:10:51.837 }, 00:10:51.837 "peer_address": { 00:10:51.837 "trtype": "TCP", 00:10:51.837 "adrfam": "IPv4", 00:10:51.837 "traddr": "10.0.0.1", 00:10:51.837 "trsvcid": "54286" 00:10:51.837 }, 00:10:51.837 "auth": { 00:10:51.837 "state": "completed", 00:10:51.837 "digest": "sha384", 00:10:51.837 "dhgroup": "ffdhe6144" 00:10:51.837 } 00:10:51.837 } 00:10:51.837 ]' 00:10:51.837 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.095 20:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.353 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.920 20:47:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.920 20:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.487 00:10:53.487 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.487 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.487 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.746 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.746 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.746 20:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.746 20:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.746 20:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.746 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.746 { 00:10:53.746 "cntlid": 89, 00:10:53.746 "qid": 0, 00:10:53.746 "state": "enabled", 00:10:53.746 "thread": "nvmf_tgt_poll_group_000", 00:10:53.746 "listen_address": { 00:10:53.746 "trtype": "TCP", 00:10:53.746 "adrfam": "IPv4", 00:10:53.746 "traddr": "10.0.0.2", 00:10:53.746 "trsvcid": "4420" 00:10:53.746 }, 00:10:53.746 "peer_address": { 00:10:53.747 "trtype": "TCP", 00:10:53.747 "adrfam": "IPv4", 00:10:53.747 "traddr": "10.0.0.1", 00:10:53.747 "trsvcid": "54324" 00:10:53.747 }, 00:10:53.747 "auth": { 00:10:53.747 "state": "completed", 00:10:53.747 "digest": "sha384", 00:10:53.747 "dhgroup": "ffdhe8192" 00:10:53.747 } 00:10:53.747 } 00:10:53.747 ]' 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.747 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.005 20:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret 
DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:54.573 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.867 20:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.435 00:10:55.436 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.436 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.436 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.695 { 00:10:55.695 "cntlid": 91, 00:10:55.695 "qid": 0, 00:10:55.695 "state": "enabled", 00:10:55.695 "thread": "nvmf_tgt_poll_group_000", 00:10:55.695 "listen_address": { 00:10:55.695 "trtype": "TCP", 00:10:55.695 "adrfam": "IPv4", 00:10:55.695 "traddr": "10.0.0.2", 00:10:55.695 "trsvcid": "4420" 00:10:55.695 }, 00:10:55.695 "peer_address": { 00:10:55.695 "trtype": "TCP", 00:10:55.695 "adrfam": "IPv4", 00:10:55.695 "traddr": "10.0.0.1", 00:10:55.695 "trsvcid": "54346" 00:10:55.695 }, 00:10:55.695 "auth": { 00:10:55.695 "state": "completed", 00:10:55.695 "digest": "sha384", 00:10:55.695 "dhgroup": "ffdhe8192" 00:10:55.695 } 00:10:55.695 } 00:10:55.695 ]' 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.695 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.954 20:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:10:56.524 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.784 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.044 00:10:57.303 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.303 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.303 20:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.303 { 00:10:57.303 "cntlid": 93, 00:10:57.303 "qid": 0, 00:10:57.303 "state": "enabled", 00:10:57.303 "thread": "nvmf_tgt_poll_group_000", 00:10:57.303 "listen_address": { 00:10:57.303 "trtype": "TCP", 00:10:57.303 "adrfam": "IPv4", 00:10:57.303 "traddr": "10.0.0.2", 00:10:57.303 "trsvcid": "4420" 00:10:57.303 }, 00:10:57.303 "peer_address": { 00:10:57.303 "trtype": "TCP", 00:10:57.303 "adrfam": "IPv4", 00:10:57.303 "traddr": "10.0.0.1", 00:10:57.303 "trsvcid": "36046" 00:10:57.303 }, 00:10:57.303 
"auth": { 00:10:57.303 "state": "completed", 00:10:57.303 "digest": "sha384", 00:10:57.303 "dhgroup": "ffdhe8192" 00:10:57.303 } 00:10:57.303 } 00:10:57.303 ]' 00:10:57.303 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.561 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.821 20:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:58.389 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.648 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.908 00:10:59.167 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.167 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.167 20:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.167 { 00:10:59.167 "cntlid": 95, 00:10:59.167 "qid": 0, 00:10:59.167 "state": "enabled", 00:10:59.167 "thread": "nvmf_tgt_poll_group_000", 00:10:59.167 "listen_address": { 00:10:59.167 "trtype": "TCP", 00:10:59.167 "adrfam": "IPv4", 00:10:59.167 "traddr": "10.0.0.2", 00:10:59.167 "trsvcid": "4420" 00:10:59.167 }, 00:10:59.167 "peer_address": { 00:10:59.167 "trtype": "TCP", 00:10:59.167 "adrfam": "IPv4", 00:10:59.167 "traddr": "10.0.0.1", 00:10:59.167 "trsvcid": "36076" 00:10:59.167 }, 00:10:59.167 "auth": { 00:10:59.167 "state": "completed", 00:10:59.167 "digest": "sha384", 00:10:59.167 "dhgroup": "ffdhe8192" 00:10:59.167 } 00:10:59.167 } 00:10:59.167 ]' 00:10:59.167 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.426 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.685 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.253 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:00.254 20:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.254 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.512 00:11:00.512 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
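
Right after each attach, the test proves that authentication actually happened rather than just that the connection came up: the host must report the nvme0 controller, and the target's view of the queue pair (qid 0 here) must show the negotiated digest, DH group and a completed auth state. Condensed, the check sequence for the sha512/null pass above looks like this; RPC paths and NQN are taken from this run, and the target-side call is issued through rpc_cmd in the real script:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Host side: the attached controller must be visible as nvme0.
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target side: the qpair must carry the expected auth parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
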
00:11:00.512 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.512 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.771 { 00:11:00.771 "cntlid": 97, 00:11:00.771 "qid": 0, 00:11:00.771 "state": "enabled", 00:11:00.771 "thread": "nvmf_tgt_poll_group_000", 00:11:00.771 "listen_address": { 00:11:00.771 "trtype": "TCP", 00:11:00.771 "adrfam": "IPv4", 00:11:00.771 "traddr": "10.0.0.2", 00:11:00.771 "trsvcid": "4420" 00:11:00.771 }, 00:11:00.771 "peer_address": { 00:11:00.771 "trtype": "TCP", 00:11:00.771 "adrfam": "IPv4", 00:11:00.771 "traddr": "10.0.0.1", 00:11:00.771 "trsvcid": "36088" 00:11:00.771 }, 00:11:00.771 "auth": { 00:11:00.771 "state": "completed", 00:11:00.771 "digest": "sha512", 00:11:00.771 "dhgroup": "null" 00:11:00.771 } 00:11:00.771 } 00:11:00.771 ]' 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.771 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.030 20:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.597 20:47:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:01.597 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.855 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.113 00:11:02.113 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.113 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.113 20:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.371 { 00:11:02.371 "cntlid": 99, 00:11:02.371 "qid": 0, 00:11:02.371 "state": "enabled", 00:11:02.371 "thread": "nvmf_tgt_poll_group_000", 00:11:02.371 "listen_address": { 00:11:02.371 "trtype": "TCP", 00:11:02.371 "adrfam": 
"IPv4", 00:11:02.371 "traddr": "10.0.0.2", 00:11:02.371 "trsvcid": "4420" 00:11:02.371 }, 00:11:02.371 "peer_address": { 00:11:02.371 "trtype": "TCP", 00:11:02.371 "adrfam": "IPv4", 00:11:02.371 "traddr": "10.0.0.1", 00:11:02.371 "trsvcid": "36114" 00:11:02.371 }, 00:11:02.371 "auth": { 00:11:02.371 "state": "completed", 00:11:02.371 "digest": "sha512", 00:11:02.371 "dhgroup": "null" 00:11:02.371 } 00:11:02.371 } 00:11:02.371 ]' 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.371 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.628 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:03.194 20:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.453 20:47:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.453 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.709 00:11:03.709 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.709 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.710 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.710 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.710 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.710 20:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.710 20:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.966 { 00:11:03.966 "cntlid": 101, 00:11:03.966 "qid": 0, 00:11:03.966 "state": "enabled", 00:11:03.966 "thread": "nvmf_tgt_poll_group_000", 00:11:03.966 "listen_address": { 00:11:03.966 "trtype": "TCP", 00:11:03.966 "adrfam": "IPv4", 00:11:03.966 "traddr": "10.0.0.2", 00:11:03.966 "trsvcid": "4420" 00:11:03.966 }, 00:11:03.966 "peer_address": { 00:11:03.966 "trtype": "TCP", 00:11:03.966 "adrfam": "IPv4", 00:11:03.966 "traddr": "10.0.0.1", 00:11:03.966 "trsvcid": "36132" 00:11:03.966 }, 00:11:03.966 "auth": { 00:11:03.966 "state": "completed", 00:11:03.966 "digest": "sha512", 00:11:03.966 "dhgroup": "null" 00:11:03.966 } 00:11:03.966 } 00:11:03.966 ]' 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
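
Each of these blocks is one iteration of the same nested loop over digests, DH groups and key indices; only the parameters change. Stripped of the xtrace noise, the pass that just finished (sha512, null group, key index 2) boils down to the following host/target pairing. This is a sketch: key2/ckey2 are names of keys the test registered earlier (not shown in this excerpt), and the target-side add_host is issued through rpc_cmd in the actual script.

  digest=sha512 dhgroup=null i=2
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e

  # Pin the host-side initiator to exactly one digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Authorize the host on the target, bound to key$i (plus ckey$i when a
  # controller key exists, enabling bidirectional authentication).
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"

  # Attach from the host with the matching pair; a successful attach means the
  # DH-HMAC-CHAP exchange completed for this combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "key$i" --dhchap-ctrlr-key "ckey$i"
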
00:11:03.966 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.224 20:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:04.790 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.048 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.308 00:11:05.308 20:47:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.308 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.308 20:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.567 { 00:11:05.567 "cntlid": 103, 00:11:05.567 "qid": 0, 00:11:05.567 "state": "enabled", 00:11:05.567 "thread": "nvmf_tgt_poll_group_000", 00:11:05.567 "listen_address": { 00:11:05.567 "trtype": "TCP", 00:11:05.567 "adrfam": "IPv4", 00:11:05.567 "traddr": "10.0.0.2", 00:11:05.567 "trsvcid": "4420" 00:11:05.567 }, 00:11:05.567 "peer_address": { 00:11:05.567 "trtype": "TCP", 00:11:05.567 "adrfam": "IPv4", 00:11:05.567 "traddr": "10.0.0.1", 00:11:05.567 "trsvcid": "36144" 00:11:05.567 }, 00:11:05.567 "auth": { 00:11:05.567 "state": "completed", 00:11:05.567 "digest": "sha512", 00:11:05.567 "dhgroup": "null" 00:11:05.567 } 00:11:05.567 } 00:11:05.567 ]' 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.567 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.826 20:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.411 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.670 00:11:06.670 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.670 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.670 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.938 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.938 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.938 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.938 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.207 { 00:11:07.207 "cntlid": 105, 00:11:07.207 "qid": 0, 00:11:07.207 "state": "enabled", 00:11:07.207 "thread": "nvmf_tgt_poll_group_000", 00:11:07.207 
"listen_address": { 00:11:07.207 "trtype": "TCP", 00:11:07.207 "adrfam": "IPv4", 00:11:07.207 "traddr": "10.0.0.2", 00:11:07.207 "trsvcid": "4420" 00:11:07.207 }, 00:11:07.207 "peer_address": { 00:11:07.207 "trtype": "TCP", 00:11:07.207 "adrfam": "IPv4", 00:11:07.207 "traddr": "10.0.0.1", 00:11:07.207 "trsvcid": "42654" 00:11:07.207 }, 00:11:07.207 "auth": { 00:11:07.207 "state": "completed", 00:11:07.207 "digest": "sha512", 00:11:07.207 "dhgroup": "ffdhe2048" 00:11:07.207 } 00:11:07.207 } 00:11:07.207 ]' 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.207 20:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.466 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.035 20:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.294 00:11:08.294 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.294 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.294 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.553 { 00:11:08.553 "cntlid": 107, 00:11:08.553 "qid": 0, 00:11:08.553 "state": "enabled", 00:11:08.553 "thread": "nvmf_tgt_poll_group_000", 00:11:08.553 "listen_address": { 00:11:08.553 "trtype": "TCP", 00:11:08.553 "adrfam": "IPv4", 00:11:08.553 "traddr": "10.0.0.2", 00:11:08.553 "trsvcid": "4420" 00:11:08.553 }, 00:11:08.553 "peer_address": { 00:11:08.553 "trtype": "TCP", 00:11:08.553 "adrfam": "IPv4", 00:11:08.553 "traddr": "10.0.0.1", 00:11:08.553 "trsvcid": "42688" 00:11:08.553 }, 00:11:08.553 "auth": { 00:11:08.553 "state": "completed", 00:11:08.553 "digest": "sha512", 00:11:08.553 "dhgroup": "ffdhe2048" 00:11:08.553 } 00:11:08.553 } 00:11:08.553 ]' 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:08.553 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.812 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:08.812 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.812 20:47:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.812 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.812 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.072 20:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.639 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.898 00:11:09.899 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.899 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.899 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.157 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.157 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.157 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.157 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.157 20:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.157 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.157 { 00:11:10.157 "cntlid": 109, 00:11:10.157 "qid": 0, 00:11:10.157 "state": "enabled", 00:11:10.157 "thread": "nvmf_tgt_poll_group_000", 00:11:10.157 "listen_address": { 00:11:10.157 "trtype": "TCP", 00:11:10.157 "adrfam": "IPv4", 00:11:10.157 "traddr": "10.0.0.2", 00:11:10.157 "trsvcid": "4420" 00:11:10.157 }, 00:11:10.157 "peer_address": { 00:11:10.157 "trtype": "TCP", 00:11:10.157 "adrfam": "IPv4", 00:11:10.157 "traddr": "10.0.0.1", 00:11:10.157 "trsvcid": "42716" 00:11:10.157 }, 00:11:10.157 "auth": { 00:11:10.157 "state": "completed", 00:11:10.157 "digest": "sha512", 00:11:10.157 "dhgroup": "ffdhe2048" 00:11:10.157 } 00:11:10.157 } 00:11:10.158 ]' 00:11:10.158 20:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.158 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:10.158 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.158 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:10.158 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.416 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.416 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.416 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.416 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:10.986 20:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:11.245 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:11.504 00:11:11.504 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.504 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.504 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.763 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.763 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.763 20:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.763 20:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.763 20:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.763 20:47:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:11:11.763 { 00:11:11.763 "cntlid": 111, 00:11:11.763 "qid": 0, 00:11:11.763 "state": "enabled", 00:11:11.763 "thread": "nvmf_tgt_poll_group_000", 00:11:11.763 "listen_address": { 00:11:11.763 "trtype": "TCP", 00:11:11.763 "adrfam": "IPv4", 00:11:11.763 "traddr": "10.0.0.2", 00:11:11.763 "trsvcid": "4420" 00:11:11.763 }, 00:11:11.763 "peer_address": { 00:11:11.763 "trtype": "TCP", 00:11:11.763 "adrfam": "IPv4", 00:11:11.763 "traddr": "10.0.0.1", 00:11:11.763 "trsvcid": "42748" 00:11:11.763 }, 00:11:11.763 "auth": { 00:11:11.763 "state": "completed", 00:11:11.763 "digest": "sha512", 00:11:11.763 "dhgroup": "ffdhe2048" 00:11:11.763 } 00:11:11.763 } 00:11:11.763 ]' 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.764 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.022 20:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:12.589 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.848 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.108 00:11:13.108 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.108 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.108 20:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.367 { 00:11:13.367 "cntlid": 113, 00:11:13.367 "qid": 0, 00:11:13.367 "state": "enabled", 00:11:13.367 "thread": "nvmf_tgt_poll_group_000", 00:11:13.367 "listen_address": { 00:11:13.367 "trtype": "TCP", 00:11:13.367 "adrfam": "IPv4", 00:11:13.367 "traddr": "10.0.0.2", 00:11:13.367 "trsvcid": "4420" 00:11:13.367 }, 00:11:13.367 "peer_address": { 00:11:13.367 "trtype": "TCP", 00:11:13.367 "adrfam": "IPv4", 00:11:13.367 "traddr": "10.0.0.1", 00:11:13.367 "trsvcid": "42780" 00:11:13.367 }, 00:11:13.367 "auth": { 00:11:13.367 "state": "completed", 00:11:13.367 "digest": "sha512", 00:11:13.367 "dhgroup": "ffdhe3072" 00:11:13.367 } 00:11:13.367 } 00:11:13.367 ]' 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.367 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.626 20:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:14.194 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.453 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.712 00:11:14.712 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.712 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.712 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.970 { 00:11:14.970 "cntlid": 115, 00:11:14.970 "qid": 0, 00:11:14.970 "state": "enabled", 00:11:14.970 "thread": "nvmf_tgt_poll_group_000", 00:11:14.970 "listen_address": { 00:11:14.970 "trtype": "TCP", 00:11:14.970 "adrfam": "IPv4", 00:11:14.970 "traddr": "10.0.0.2", 00:11:14.970 "trsvcid": "4420" 00:11:14.970 }, 00:11:14.970 "peer_address": { 00:11:14.970 "trtype": "TCP", 00:11:14.970 "adrfam": "IPv4", 00:11:14.970 "traddr": "10.0.0.1", 00:11:14.970 "trsvcid": "42808" 00:11:14.970 }, 00:11:14.970 "auth": { 00:11:14.970 "state": "completed", 00:11:14.970 "digest": "sha512", 00:11:14.970 "dhgroup": "ffdhe3072" 00:11:14.970 } 00:11:14.970 } 00:11:14.970 ]' 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.970 20:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.229 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:11:15.797 20:47:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:15.797 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.058 20:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.316 00:11:16.316 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.316 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.316 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.576 { 00:11:16.576 "cntlid": 117, 00:11:16.576 "qid": 0, 00:11:16.576 "state": "enabled", 00:11:16.576 "thread": "nvmf_tgt_poll_group_000", 00:11:16.576 "listen_address": { 00:11:16.576 "trtype": "TCP", 00:11:16.576 "adrfam": "IPv4", 00:11:16.576 "traddr": "10.0.0.2", 00:11:16.576 "trsvcid": "4420" 00:11:16.576 }, 00:11:16.576 "peer_address": { 00:11:16.576 "trtype": "TCP", 00:11:16.576 "adrfam": "IPv4", 00:11:16.576 "traddr": "10.0.0.1", 00:11:16.576 "trsvcid": "50304" 00:11:16.576 }, 00:11:16.576 "auth": { 00:11:16.576 "state": "completed", 00:11:16.576 "digest": "sha512", 00:11:16.576 "dhgroup": "ffdhe3072" 00:11:16.576 } 00:11:16.576 } 00:11:16.576 ]' 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.576 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.835 20:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.401 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:17.402 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.660 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.917 00:11:17.917 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.917 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.918 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.176 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.176 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.176 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.176 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.176 20:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.176 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.176 { 00:11:18.176 "cntlid": 119, 00:11:18.176 "qid": 0, 00:11:18.176 "state": "enabled", 00:11:18.176 "thread": "nvmf_tgt_poll_group_000", 00:11:18.176 "listen_address": { 00:11:18.176 "trtype": "TCP", 00:11:18.177 "adrfam": "IPv4", 00:11:18.177 "traddr": "10.0.0.2", 00:11:18.177 "trsvcid": "4420" 00:11:18.177 }, 00:11:18.177 "peer_address": { 00:11:18.177 "trtype": "TCP", 00:11:18.177 "adrfam": "IPv4", 00:11:18.177 "traddr": "10.0.0.1", 00:11:18.177 "trsvcid": "50336" 00:11:18.177 }, 00:11:18.177 "auth": { 00:11:18.177 "state": "completed", 00:11:18.177 "digest": "sha512", 00:11:18.177 "dhgroup": "ffdhe3072" 00:11:18.177 } 00:11:18.177 } 00:11:18.177 ]' 00:11:18.177 20:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.177 
20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:18.177 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.177 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:18.177 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.436 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.436 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.436 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.436 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:19.002 20:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.260 20:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.261 20:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.261 20:47:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.261 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.261 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.519 00:11:19.519 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.519 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.519 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.777 { 00:11:19.777 "cntlid": 121, 00:11:19.777 "qid": 0, 00:11:19.777 "state": "enabled", 00:11:19.777 "thread": "nvmf_tgt_poll_group_000", 00:11:19.777 "listen_address": { 00:11:19.777 "trtype": "TCP", 00:11:19.777 "adrfam": "IPv4", 00:11:19.777 "traddr": "10.0.0.2", 00:11:19.777 "trsvcid": "4420" 00:11:19.777 }, 00:11:19.777 "peer_address": { 00:11:19.777 "trtype": "TCP", 00:11:19.777 "adrfam": "IPv4", 00:11:19.777 "traddr": "10.0.0.1", 00:11:19.777 "trsvcid": "50366" 00:11:19.777 }, 00:11:19.777 "auth": { 00:11:19.777 "state": "completed", 00:11:19.777 "digest": "sha512", 00:11:19.777 "dhgroup": "ffdhe4096" 00:11:19.777 } 00:11:19.777 } 00:11:19.777 ]' 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.777 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.035 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.035 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.035 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.035 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.035 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.035 20:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret 
DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:20.602 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.861 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.119 00:11:21.119 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.119 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.119 20:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
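The repeating pattern traced above is the target-side half of each authentication round: the bdev_nvme options on the host RPC server (/var/tmp/host.sock) are pinned to one digest/dhgroup pair, the host NQN is re-added to the subsystem with a named DH-HMAC-CHAP key, and a controller is attached through that socket. A minimal bash sketch of one such round follows; rpc_cmd in the trace is the autotest wrapper around rpc.py for the target application and is approximated here with a plain rpc.py call on its default socket, and key1/ckey1 are key names registered with the target earlier in the run (not in this excerpt).

  # One target-side configuration round, as repeated in the trace above.
  # Assumptions: rpc.py reaches the target app on its default socket, and the
  # bdev_nvme host RPC server listens on /var/tmp/host.sock as in this run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e

  # Pin the initiator side to a single digest/dhgroup combination for this round.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Allow the host on the subsystem with a bidirectional DH-HMAC-CHAP key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach a controller through the host RPC server using the same key pair.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
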
00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.377 { 00:11:21.377 "cntlid": 123, 00:11:21.377 "qid": 0, 00:11:21.377 "state": "enabled", 00:11:21.377 "thread": "nvmf_tgt_poll_group_000", 00:11:21.377 "listen_address": { 00:11:21.377 "trtype": "TCP", 00:11:21.377 "adrfam": "IPv4", 00:11:21.377 "traddr": "10.0.0.2", 00:11:21.377 "trsvcid": "4420" 00:11:21.377 }, 00:11:21.377 "peer_address": { 00:11:21.377 "trtype": "TCP", 00:11:21.377 "adrfam": "IPv4", 00:11:21.377 "traddr": "10.0.0.1", 00:11:21.377 "trsvcid": "50384" 00:11:21.377 }, 00:11:21.377 "auth": { 00:11:21.377 "state": "completed", 00:11:21.377 "digest": "sha512", 00:11:21.377 "dhgroup": "ffdhe4096" 00:11:21.377 } 00:11:21.377 } 00:11:21.377 ]' 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:21.377 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.378 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:21.378 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.635 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.635 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.635 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.635 20:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.566 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.825 00:11:22.825 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.825 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.825 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.083 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.083 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.083 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.083 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.083 20:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.083 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.083 { 00:11:23.083 "cntlid": 125, 00:11:23.083 "qid": 0, 00:11:23.083 "state": "enabled", 00:11:23.083 "thread": "nvmf_tgt_poll_group_000", 00:11:23.083 "listen_address": { 00:11:23.083 "trtype": "TCP", 00:11:23.083 "adrfam": "IPv4", 00:11:23.083 "traddr": "10.0.0.2", 00:11:23.083 "trsvcid": "4420" 00:11:23.084 }, 00:11:23.084 "peer_address": { 00:11:23.084 "trtype": "TCP", 00:11:23.084 "adrfam": "IPv4", 00:11:23.084 "traddr": "10.0.0.1", 00:11:23.084 "trsvcid": "50408" 00:11:23.084 }, 00:11:23.084 
"auth": { 00:11:23.084 "state": "completed", 00:11:23.084 "digest": "sha512", 00:11:23.084 "dhgroup": "ffdhe4096" 00:11:23.084 } 00:11:23.084 } 00:11:23.084 ]' 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.084 20:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.342 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.908 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:23.909 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:24.166 20:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:24.425 00:11:24.425 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.425 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.425 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.683 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.683 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.683 20:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.683 20:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.683 20:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.683 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.683 { 00:11:24.683 "cntlid": 127, 00:11:24.683 "qid": 0, 00:11:24.683 "state": "enabled", 00:11:24.683 "thread": "nvmf_tgt_poll_group_000", 00:11:24.683 "listen_address": { 00:11:24.683 "trtype": "TCP", 00:11:24.684 "adrfam": "IPv4", 00:11:24.684 "traddr": "10.0.0.2", 00:11:24.684 "trsvcid": "4420" 00:11:24.684 }, 00:11:24.684 "peer_address": { 00:11:24.684 "trtype": "TCP", 00:11:24.684 "adrfam": "IPv4", 00:11:24.684 "traddr": "10.0.0.1", 00:11:24.684 "trsvcid": "50442" 00:11:24.684 }, 00:11:24.684 "auth": { 00:11:24.684 "state": "completed", 00:11:24.684 "digest": "sha512", 00:11:24.684 "dhgroup": "ffdhe4096" 00:11:24.684 } 00:11:24.684 } 00:11:24.684 ]' 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.684 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.942 20:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:25.509 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:25.768 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.769 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.028 00:11:26.028 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.028 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
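Every attach in the trace is followed by the same verification: bdev_nvme_get_controllers on the host socket must report a controller named nvme0, and nvmf_subsystem_get_qpairs on the target must show the qpair's auth block with the expected digest and dhgroup and a state of "completed", after which the controller is detached again. A sketch of that check, reusing the variables from the previous sketch and treating the expected values as parameters rather than the hard-coded [[ ... == ... ]] tests seen above:

  # Verify one authenticated connection, mirroring the jq checks in the trace.
  # Assumes $rpc, $hostsock and $subnqn as defined in the previous sketch.
  expected_digest=sha512
  expected_dhgroup=ffdhe6144

  [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$expected_digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$expected_dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear the controller down before the next key/dhgroup combination.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
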
00:11:26.028 20:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.287 { 00:11:26.287 "cntlid": 129, 00:11:26.287 "qid": 0, 00:11:26.287 "state": "enabled", 00:11:26.287 "thread": "nvmf_tgt_poll_group_000", 00:11:26.287 "listen_address": { 00:11:26.287 "trtype": "TCP", 00:11:26.287 "adrfam": "IPv4", 00:11:26.287 "traddr": "10.0.0.2", 00:11:26.287 "trsvcid": "4420" 00:11:26.287 }, 00:11:26.287 "peer_address": { 00:11:26.287 "trtype": "TCP", 00:11:26.287 "adrfam": "IPv4", 00:11:26.287 "traddr": "10.0.0.1", 00:11:26.287 "trsvcid": "48372" 00:11:26.287 }, 00:11:26.287 "auth": { 00:11:26.287 "state": "completed", 00:11:26.287 "digest": "sha512", 00:11:26.287 "dhgroup": "ffdhe6144" 00:11:26.287 } 00:11:26.287 } 00:11:26.287 ]' 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:26.287 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.547 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:26.547 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.547 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.547 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.547 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.547 20:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:27.115 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.115 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:27.115 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.115 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.408 
20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.408 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.976 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.976 { 00:11:27.976 "cntlid": 131, 00:11:27.976 "qid": 0, 00:11:27.976 "state": "enabled", 00:11:27.976 "thread": "nvmf_tgt_poll_group_000", 00:11:27.976 "listen_address": { 00:11:27.976 "trtype": "TCP", 00:11:27.976 "adrfam": "IPv4", 00:11:27.976 "traddr": "10.0.0.2", 00:11:27.976 "trsvcid": 
"4420" 00:11:27.976 }, 00:11:27.976 "peer_address": { 00:11:27.976 "trtype": "TCP", 00:11:27.976 "adrfam": "IPv4", 00:11:27.976 "traddr": "10.0.0.1", 00:11:27.976 "trsvcid": "48394" 00:11:27.976 }, 00:11:27.976 "auth": { 00:11:27.976 "state": "completed", 00:11:27.976 "digest": "sha512", 00:11:27.976 "dhgroup": "ffdhe6144" 00:11:27.976 } 00:11:27.976 } 00:11:27.976 ]' 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.976 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.234 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.234 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.234 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.234 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.234 20:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.235 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.171 20:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.430 00:11:29.430 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.430 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.430 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.689 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.689 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.689 20:47:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.689 20:47:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.690 { 00:11:29.690 "cntlid": 133, 00:11:29.690 "qid": 0, 00:11:29.690 "state": "enabled", 00:11:29.690 "thread": "nvmf_tgt_poll_group_000", 00:11:29.690 "listen_address": { 00:11:29.690 "trtype": "TCP", 00:11:29.690 "adrfam": "IPv4", 00:11:29.690 "traddr": "10.0.0.2", 00:11:29.690 "trsvcid": "4420" 00:11:29.690 }, 00:11:29.690 "peer_address": { 00:11:29.690 "trtype": "TCP", 00:11:29.690 "adrfam": "IPv4", 00:11:29.690 "traddr": "10.0.0.1", 00:11:29.690 "trsvcid": "48420" 00:11:29.690 }, 00:11:29.690 "auth": { 00:11:29.690 "state": "completed", 00:11:29.690 "digest": "sha512", 00:11:29.690 "dhgroup": "ffdhe6144" 00:11:29.690 } 00:11:29.690 } 00:11:29.690 ]' 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.690 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.948 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.948 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:11:29.948 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.948 20:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:30.517 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:30.776 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:31.344 00:11:31.344 20:47:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.344 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.344 20:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.344 { 00:11:31.344 "cntlid": 135, 00:11:31.344 "qid": 0, 00:11:31.344 "state": "enabled", 00:11:31.344 "thread": "nvmf_tgt_poll_group_000", 00:11:31.344 "listen_address": { 00:11:31.344 "trtype": "TCP", 00:11:31.344 "adrfam": "IPv4", 00:11:31.344 "traddr": "10.0.0.2", 00:11:31.344 "trsvcid": "4420" 00:11:31.344 }, 00:11:31.344 "peer_address": { 00:11:31.344 "trtype": "TCP", 00:11:31.344 "adrfam": "IPv4", 00:11:31.344 "traddr": "10.0.0.1", 00:11:31.344 "trsvcid": "48446" 00:11:31.344 }, 00:11:31.344 "auth": { 00:11:31.344 "state": "completed", 00:11:31.344 "digest": "sha512", 00:11:31.344 "dhgroup": "ffdhe6144" 00:11:31.344 } 00:11:31.344 } 00:11:31.344 ]' 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.344 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.617 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.617 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.617 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.617 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.617 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.911 20:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:32.170 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.428 20:47:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.428 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.993 00:11:32.993 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.993 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.993 20:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.252 { 00:11:33.252 "cntlid": 137, 00:11:33.252 "qid": 0, 00:11:33.252 "state": "enabled", 
00:11:33.252 "thread": "nvmf_tgt_poll_group_000", 00:11:33.252 "listen_address": { 00:11:33.252 "trtype": "TCP", 00:11:33.252 "adrfam": "IPv4", 00:11:33.252 "traddr": "10.0.0.2", 00:11:33.252 "trsvcid": "4420" 00:11:33.252 }, 00:11:33.252 "peer_address": { 00:11:33.252 "trtype": "TCP", 00:11:33.252 "adrfam": "IPv4", 00:11:33.252 "traddr": "10.0.0.1", 00:11:33.252 "trsvcid": "48464" 00:11:33.252 }, 00:11:33.252 "auth": { 00:11:33.252 "state": "completed", 00:11:33.252 "digest": "sha512", 00:11:33.252 "dhgroup": "ffdhe8192" 00:11:33.252 } 00:11:33.252 } 00:11:33.252 ]' 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.252 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.510 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.510 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.510 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.510 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:34.079 20:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:34.337 
20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.337 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.904 00:11:34.904 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.904 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.904 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.164 { 00:11:35.164 "cntlid": 139, 00:11:35.164 "qid": 0, 00:11:35.164 "state": "enabled", 00:11:35.164 "thread": "nvmf_tgt_poll_group_000", 00:11:35.164 "listen_address": { 00:11:35.164 "trtype": "TCP", 00:11:35.164 "adrfam": "IPv4", 00:11:35.164 "traddr": "10.0.0.2", 00:11:35.164 "trsvcid": "4420" 00:11:35.164 }, 00:11:35.164 "peer_address": { 00:11:35.164 "trtype": "TCP", 00:11:35.164 "adrfam": "IPv4", 00:11:35.164 "traddr": "10.0.0.1", 00:11:35.164 "trsvcid": "48488" 00:11:35.164 }, 00:11:35.164 "auth": { 00:11:35.164 "state": "completed", 00:11:35.164 "digest": "sha512", 00:11:35.164 "dhgroup": "ffdhe8192" 00:11:35.164 } 00:11:35.164 } 00:11:35.164 ]' 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:35.164 20:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:11:35.164 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.164 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.164 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.422 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:01:NmU0YjI2MTM1ZDYyZjM3MGNmNjk2MWU2ZjllMTJjMTizaPN6: --dhchap-ctrl-secret DHHC-1:02:N2IyMGZkNjM1YjVjNzdiNGYyN2JhNjBlMGRkYjBkOTAyZTJhOTY1YzgxZDU5N2FjoVg0Ww==: 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:35.989 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.248 20:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.816 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.816 { 00:11:36.816 "cntlid": 141, 00:11:36.816 "qid": 0, 00:11:36.816 "state": "enabled", 00:11:36.816 "thread": "nvmf_tgt_poll_group_000", 00:11:36.816 "listen_address": { 00:11:36.816 "trtype": "TCP", 00:11:36.816 "adrfam": "IPv4", 00:11:36.816 "traddr": "10.0.0.2", 00:11:36.816 "trsvcid": "4420" 00:11:36.816 }, 00:11:36.816 "peer_address": { 00:11:36.816 "trtype": "TCP", 00:11:36.816 "adrfam": "IPv4", 00:11:36.816 "traddr": "10.0.0.1", 00:11:36.816 "trsvcid": "34670" 00:11:36.816 }, 00:11:36.816 "auth": { 00:11:36.816 "state": "completed", 00:11:36.816 "digest": "sha512", 00:11:36.816 "dhgroup": "ffdhe8192" 00:11:36.816 } 00:11:36.816 } 00:11:36.816 ]' 00:11:36.816 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.074 20:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.333 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:02:ZWEwYzIzNTI3MTQ1ZmFhYmJjMTJmMGE4NmNhYmM4Y2MwMmM4NDJlNWU4OTVhMTY5QCr0tg==: --dhchap-ctrl-secret DHHC-1:01:ZDMyMTA4NDUxMTA3NTJlZGNjMzY2ZWMwNjJkY2JhZDHnhT5o: 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:37.900 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:37.901 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:37.901 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.901 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:37.901 20:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.901 20:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.159 20:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.159 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.159 20:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.417 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
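(Not part of the captured output.) For readers following the trace, this is roughly the per-iteration RPC sequence the auth test drives, reduced to a standalone bash sketch. The rpc.py path, socket, NQN/UUID values and key name simply mirror what is visible in the log above and stand in for whatever a real setup would use.

    # Values copied from the trace; treat them as placeholders for your own environment.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e

    # Restrict the host to the digest/dhgroup pair under test (here sha512 / ffdhe8192).
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Allow the host on the target subsystem with DH-HMAC-CHAP key "key3"
    # (the target-side RPC goes to the default spdk.sock, as rpc_cmd does in the test).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # Attach from the host side; the test then inspects the resulting qpair's auth state.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3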
00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.676 { 00:11:38.676 "cntlid": 143, 00:11:38.676 "qid": 0, 00:11:38.676 "state": "enabled", 00:11:38.676 "thread": "nvmf_tgt_poll_group_000", 00:11:38.676 "listen_address": { 00:11:38.676 "trtype": "TCP", 00:11:38.676 "adrfam": "IPv4", 00:11:38.676 "traddr": "10.0.0.2", 00:11:38.676 "trsvcid": "4420" 00:11:38.676 }, 00:11:38.676 "peer_address": { 00:11:38.676 "trtype": "TCP", 00:11:38.676 "adrfam": "IPv4", 00:11:38.676 "traddr": "10.0.0.1", 00:11:38.676 "trsvcid": "34704" 00:11:38.676 }, 00:11:38.676 "auth": { 00:11:38.676 "state": "completed", 00:11:38.676 "digest": "sha512", 00:11:38.676 "dhgroup": "ffdhe8192" 00:11:38.676 } 00:11:38.676 } 00:11:38.676 ]' 00:11:38.676 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.934 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.193 20:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.762 20:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.329 00:11:40.329 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.329 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.329 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.587 { 00:11:40.587 "cntlid": 145, 00:11:40.587 "qid": 0, 00:11:40.587 "state": "enabled", 00:11:40.587 "thread": "nvmf_tgt_poll_group_000", 00:11:40.587 "listen_address": { 00:11:40.587 "trtype": "TCP", 00:11:40.587 "adrfam": "IPv4", 00:11:40.587 "traddr": "10.0.0.2", 00:11:40.587 "trsvcid": "4420" 00:11:40.587 }, 00:11:40.587 "peer_address": { 00:11:40.587 "trtype": "TCP", 00:11:40.587 "adrfam": "IPv4", 00:11:40.587 "traddr": "10.0.0.1", 00:11:40.587 "trsvcid": "34726" 00:11:40.587 }, 00:11:40.587 "auth": { 00:11:40.587 "state": "completed", 00:11:40.587 "digest": "sha512", 00:11:40.587 "dhgroup": "ffdhe8192" 00:11:40.587 } 00:11:40.587 } 
00:11:40.587 ]' 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.587 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.845 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.845 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.845 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.845 20:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret DHHC-1:00:YWU2YmVlYTY0N2U0N2NmMzcyNTgxY2FiYjdkY2QwMTA5ZGI1M2Y5YTkxM2JjMmRmnb4vRA==: --dhchap-ctrl-secret DHHC-1:03:NGM5MTliYzE5MjY0ZTg2NzkwMTEyOWQwMDBmN2QwMmMxOTEyNjIwMGVkN2FiZTk3NTliNTcyYjI5OWEzMGFjOZIWYBU=: 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.414 20:48:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:41.414 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:11:41.982 request: 00:11:41.982 { 00:11:41.982 "name": "nvme0", 00:11:41.982 "trtype": "tcp", 00:11:41.982 "traddr": "10.0.0.2", 00:11:41.982 "adrfam": "ipv4", 00:11:41.982 "trsvcid": "4420", 00:11:41.982 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:41.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e", 00:11:41.982 "prchk_reftag": false, 00:11:41.982 "prchk_guard": false, 00:11:41.982 "hdgst": false, 00:11:41.982 "ddgst": false, 00:11:41.982 "dhchap_key": "key2", 00:11:41.982 "method": "bdev_nvme_attach_controller", 00:11:41.982 "req_id": 1 00:11:41.982 } 00:11:41.982 Got JSON-RPC error response 00:11:41.982 response: 00:11:41.982 { 00:11:41.982 "code": -5, 00:11:41.982 "message": "Input/output error" 00:11:41.982 } 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:41.982 20:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:42.551 request: 00:11:42.551 { 00:11:42.551 "name": "nvme0", 00:11:42.551 "trtype": "tcp", 00:11:42.551 "traddr": "10.0.0.2", 00:11:42.551 "adrfam": "ipv4", 00:11:42.551 "trsvcid": "4420", 00:11:42.551 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:42.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e", 00:11:42.551 "prchk_reftag": false, 00:11:42.551 "prchk_guard": false, 00:11:42.551 "hdgst": false, 00:11:42.551 "ddgst": false, 00:11:42.551 "dhchap_key": "key1", 00:11:42.551 "dhchap_ctrlr_key": "ckey2", 00:11:42.551 "method": "bdev_nvme_attach_controller", 00:11:42.551 "req_id": 1 00:11:42.551 } 00:11:42.551 Got JSON-RPC error response 00:11:42.551 response: 00:11:42.551 { 00:11:42.551 "code": -5, 00:11:42.551 "message": "Input/output error" 00:11:42.551 } 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key1 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.551 20:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.117 request: 00:11:43.117 { 00:11:43.117 "name": "nvme0", 00:11:43.117 "trtype": "tcp", 00:11:43.117 "traddr": "10.0.0.2", 00:11:43.117 "adrfam": "ipv4", 00:11:43.117 "trsvcid": "4420", 00:11:43.117 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:43.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e", 00:11:43.117 "prchk_reftag": false, 00:11:43.117 "prchk_guard": false, 00:11:43.117 "hdgst": false, 00:11:43.117 "ddgst": false, 00:11:43.117 "dhchap_key": "key1", 00:11:43.117 "dhchap_ctrlr_key": "ckey1", 00:11:43.117 "method": "bdev_nvme_attach_controller", 00:11:43.117 "req_id": 1 00:11:43.117 } 00:11:43.117 Got JSON-RPC error response 00:11:43.117 response: 00:11:43.117 { 00:11:43.117 "code": -5, 00:11:43.117 "message": "Input/output error" 00:11:43.117 } 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69109 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69109 ']' 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69109 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69109 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.117 killing process with pid 69109 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69109' 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69109 00:11:43.117 20:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69109 00:11:43.117 20:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:43.117 20:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.117 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.117 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71798 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71798 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 71798 ']' 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.377 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71798 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 71798 ']' 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.357 20:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.357 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.357 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:44.357 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:11:44.357 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.357 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.616 20:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:11:44.617 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.617 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:45.183 00:11:45.183 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.183 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.183 20:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.183 { 00:11:45.183 "cntlid": 1, 00:11:45.183 "qid": 0, 00:11:45.183 "state": "enabled", 00:11:45.183 "thread": "nvmf_tgt_poll_group_000", 00:11:45.183 "listen_address": { 00:11:45.183 "trtype": "TCP", 00:11:45.183 "adrfam": "IPv4", 00:11:45.183 "traddr": "10.0.0.2", 00:11:45.183 "trsvcid": "4420" 00:11:45.183 }, 00:11:45.183 "peer_address": { 00:11:45.183 "trtype": "TCP", 00:11:45.183 "adrfam": "IPv4", 00:11:45.183 "traddr": "10.0.0.1", 00:11:45.183 "trsvcid": "34778" 00:11:45.183 }, 00:11:45.183 "auth": { 00:11:45.183 "state": "completed", 00:11:45.183 "digest": "sha512", 00:11:45.183 "dhgroup": "ffdhe8192" 00:11:45.183 } 00:11:45.183 } 00:11:45.183 ]' 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.183 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.442 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.442 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.442 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.442 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.442 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.700 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid 69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-secret 
DHHC-1:03:M2VhMzk1YmU2MGU0NTFiZDQ2ZGQxODJmNDZlZDA1MGNmNmQzYTE4NGRkYzc0ZmRlZTA3OGVmNjI2OWZiMmRkYmG6x40=: 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --dhchap-key key3 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:11:46.268 20:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.268 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.527 request: 00:11:46.527 { 00:11:46.527 "name": "nvme0", 00:11:46.527 "trtype": "tcp", 00:11:46.527 "traddr": "10.0.0.2", 00:11:46.527 "adrfam": "ipv4", 00:11:46.527 "trsvcid": "4420", 
00:11:46.527 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:46.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e", 00:11:46.527 "prchk_reftag": false, 00:11:46.527 "prchk_guard": false, 00:11:46.527 "hdgst": false, 00:11:46.527 "ddgst": false, 00:11:46.527 "dhchap_key": "key3", 00:11:46.527 "method": "bdev_nvme_attach_controller", 00:11:46.527 "req_id": 1 00:11:46.527 } 00:11:46.527 Got JSON-RPC error response 00:11:46.527 response: 00:11:46.527 { 00:11:46.527 "code": -5, 00:11:46.527 "message": "Input/output error" 00:11:46.527 } 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:46.527 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.786 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.044 request: 00:11:47.044 { 00:11:47.044 "name": "nvme0", 00:11:47.044 "trtype": "tcp", 00:11:47.044 "traddr": "10.0.0.2", 00:11:47.044 "adrfam": "ipv4", 00:11:47.044 "trsvcid": "4420", 00:11:47.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:47.044 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e", 00:11:47.044 "prchk_reftag": false, 00:11:47.044 "prchk_guard": false, 00:11:47.044 "hdgst": false, 00:11:47.044 "ddgst": false, 00:11:47.044 "dhchap_key": "key3", 00:11:47.044 "method": "bdev_nvme_attach_controller", 00:11:47.045 "req_id": 1 00:11:47.045 } 00:11:47.045 Got JSON-RPC error response 00:11:47.045 response: 00:11:47.045 { 00:11:47.045 "code": -5, 00:11:47.045 "message": "Input/output error" 00:11:47.045 } 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:47.045 20:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:47.309 request: 00:11:47.309 { 00:11:47.309 "name": "nvme0", 00:11:47.309 "trtype": "tcp", 00:11:47.309 "traddr": "10.0.0.2", 00:11:47.309 "adrfam": "ipv4", 00:11:47.309 "trsvcid": "4420", 00:11:47.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:47.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e", 00:11:47.309 "prchk_reftag": false, 00:11:47.309 "prchk_guard": false, 00:11:47.309 "hdgst": false, 00:11:47.309 "ddgst": false, 00:11:47.309 "dhchap_key": "key0", 00:11:47.309 "dhchap_ctrlr_key": "key1", 00:11:47.309 "method": "bdev_nvme_attach_controller", 00:11:47.309 "req_id": 1 00:11:47.309 } 00:11:47.309 Got JSON-RPC error response 00:11:47.309 response: 00:11:47.309 { 00:11:47.309 "code": -5, 00:11:47.309 "message": "Input/output error" 00:11:47.309 } 00:11:47.309 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:11:47.309 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.309 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.309 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.309 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:11:47.309 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:11:47.568 00:11:47.568 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:11:47.568 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:11:47.568 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.826 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.826 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.826 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69136 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69136 ']' 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69136 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69136 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:48.085 killing process with pid 69136 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69136' 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69136 00:11:48.085 20:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69136 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.344 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.344 rmmod nvme_tcp 00:11:48.344 rmmod nvme_fabrics 00:11:48.344 rmmod nvme_keyring 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71798 ']' 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71798 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 71798 ']' 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 71798 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71798 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:48.602 killing process with pid 71798 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71798' 
00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 71798 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 71798 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.602 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.603 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.603 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.860 20:48:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:48.860 20:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ZEf /tmp/spdk.key-sha256.w4g /tmp/spdk.key-sha384.cUr /tmp/spdk.key-sha512.5k6 /tmp/spdk.key-sha512.Wa0 /tmp/spdk.key-sha384.gGF /tmp/spdk.key-sha256.9h7 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:11:48.860 00:11:48.860 real 2m14.504s 00:11:48.860 user 5m9.047s 00:11:48.860 sys 0m28.718s 00:11:48.860 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.860 ************************************ 00:11:48.860 END TEST nvmf_auth_target 00:11:48.860 20:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.860 ************************************ 00:11:48.860 20:48:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.860 20:48:10 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:11:48.860 20:48:10 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:48.860 20:48:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:48.860 20:48:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.860 20:48:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.860 ************************************ 00:11:48.860 START TEST nvmf_bdevio_no_huge 00:11:48.860 ************************************ 00:11:48.860 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:48.860 * Looking for test storage... 
00:11:48.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:48.860 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.119 20:48:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:49.119 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:49.120 Cannot find device "nvmf_tgt_br" 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.120 Cannot find device "nvmf_tgt_br2" 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:49.120 Cannot find device "nvmf_tgt_br" 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:49.120 Cannot find device "nvmf_tgt_br2" 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.120 20:48:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.120 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.120 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:49.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:49.378 00:11:49.378 --- 10.0.0.2 ping statistics --- 00:11:49.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.378 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:49.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.027 ms 00:11:49.378 00:11:49.378 --- 10.0.0.3 ping statistics --- 00:11:49.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.378 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:11:49.378 00:11:49.378 --- 10.0.0.1 ping statistics --- 00:11:49.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.378 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:49.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72102 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72102 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72102 ']' 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:49.378 20:48:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:49.636 [2024-07-15 20:48:11.289618] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:11:49.636 [2024-07-15 20:48:11.289682] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:49.636 [2024-07-15 20:48:11.429750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.893 [2024-07-15 20:48:11.552745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.893 [2024-07-15 20:48:11.552804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.893 [2024-07-15 20:48:11.552815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.893 [2024-07-15 20:48:11.552823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.893 [2024-07-15 20:48:11.552831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.893 [2024-07-15 20:48:11.553829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:49.893 [2024-07-15 20:48:11.554033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:49.894 [2024-07-15 20:48:11.554232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:49.894 [2024-07-15 20:48:11.554364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.894 [2024-07-15 20:48:11.558037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 [2024-07-15 20:48:12.166332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 Malloc0 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:50.459 [2024-07-15 20:48:12.222440] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:50.459 { 00:11:50.459 "params": { 00:11:50.459 "name": "Nvme$subsystem", 00:11:50.459 "trtype": "$TEST_TRANSPORT", 00:11:50.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:50.459 "adrfam": "ipv4", 00:11:50.459 "trsvcid": "$NVMF_PORT", 00:11:50.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:50.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:50.459 "hdgst": ${hdgst:-false}, 00:11:50.459 "ddgst": ${ddgst:-false} 00:11:50.459 }, 00:11:50.459 "method": "bdev_nvme_attach_controller" 00:11:50.459 } 00:11:50.459 EOF 00:11:50.459 )") 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:11:50.459 20:48:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:50.459 "params": { 00:11:50.459 "name": "Nvme1", 00:11:50.459 "trtype": "tcp", 00:11:50.459 "traddr": "10.0.0.2", 00:11:50.459 "adrfam": "ipv4", 00:11:50.459 "trsvcid": "4420", 00:11:50.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:50.459 "hdgst": false, 00:11:50.459 "ddgst": false 00:11:50.459 }, 00:11:50.459 "method": "bdev_nvme_attach_controller" 00:11:50.459 }' 00:11:50.459 [2024-07-15 20:48:12.266513] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:11:50.459 [2024-07-15 20:48:12.266577] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72134 ] 00:11:50.717 [2024-07-15 20:48:12.402009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.717 [2024-07-15 20:48:12.524616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.717 [2024-07-15 20:48:12.524799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.717 [2024-07-15 20:48:12.524801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.717 [2024-07-15 20:48:12.536868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:50.977 I/O targets: 00:11:50.977 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:50.977 00:11:50.977 00:11:50.977 CUnit - A unit testing framework for C - Version 2.1-3 00:11:50.977 http://cunit.sourceforge.net/ 00:11:50.977 00:11:50.977 00:11:50.977 Suite: bdevio tests on: Nvme1n1 00:11:50.977 Test: blockdev write read block ...passed 00:11:50.977 Test: blockdev write zeroes read block ...passed 00:11:50.977 Test: blockdev write zeroes read no split ...passed 00:11:50.977 Test: blockdev write zeroes read split ...passed 00:11:50.977 Test: blockdev write zeroes read split partial ...passed 00:11:50.977 Test: blockdev reset ...[2024-07-15 20:48:12.728640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:50.977 [2024-07-15 20:48:12.728842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1848870 (9): Bad file descriptor 00:11:50.977 [2024-07-15 20:48:12.749033] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:50.977 passed 00:11:50.977 Test: blockdev write read 8 blocks ...passed 00:11:50.977 Test: blockdev write read size > 128k ...passed 00:11:50.977 Test: blockdev write read invalid size ...passed 00:11:50.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:50.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:50.977 Test: blockdev write read max offset ...passed 00:11:50.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:50.977 Test: blockdev writev readv 8 blocks ...passed 00:11:50.977 Test: blockdev writev readv 30 x 1block ...passed 00:11:50.977 Test: blockdev writev readv block ...passed 00:11:50.977 Test: blockdev writev readv size > 128k ...passed 00:11:50.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:50.977 Test: blockdev comparev and writev ...[2024-07-15 20:48:12.757433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.757572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.757595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.757605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.758012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.758027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.758041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.758050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.758317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.758332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.758345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.758354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.758913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.758932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.758946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:50.977 [2024-07-15 20:48:12.758954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:50.977 passed 00:11:50.977 Test: blockdev nvme passthru rw ...passed 00:11:50.977 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:48:12.759714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.977 [2024-07-15 20:48:12.759731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.759805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.977 [2024-07-15 20:48:12.759816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.759892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.977 [2024-07-15 20:48:12.759902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:50.977 [2024-07-15 20:48:12.759982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:50.977 [2024-07-15 20:48:12.759992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:50.977 passed 00:11:50.977 Test: blockdev nvme admin passthru ...passed 00:11:50.977 Test: blockdev copy ...passed 00:11:50.977 00:11:50.977 Run Summary: Type Total Ran Passed Failed Inactive 00:11:50.977 suites 1 1 n/a 0 0 00:11:50.977 tests 23 23 23 0 0 00:11:50.977 asserts 152 152 152 0 n/a 00:11:50.977 00:11:50.977 Elapsed time = 0.181 seconds 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.236 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:51.494 rmmod nvme_tcp 00:11:51.494 rmmod nvme_fabrics 00:11:51.494 rmmod nvme_keyring 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72102 ']' 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72102 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72102 ']' 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72102 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:11:51.494 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.495 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72102 00:11:51.495 killing process with pid 72102 00:11:51.495 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:51.495 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:51.495 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72102' 00:11:51.495 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72102 00:11:51.495 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72102 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.752 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.011 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:52.011 00:11:52.011 real 0m3.068s 00:11:52.011 user 0m9.322s 00:11:52.011 sys 0m1.356s 00:11:52.011 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.011 20:48:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:52.011 ************************************ 00:11:52.011 END TEST nvmf_bdevio_no_huge 00:11:52.011 ************************************ 00:11:52.011 20:48:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:52.011 20:48:13 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:52.011 20:48:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:52.011 20:48:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.011 20:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:52.011 ************************************ 00:11:52.011 START TEST nvmf_tls 00:11:52.011 ************************************ 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:52.011 * Looking for test storage... 
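Editor's note: the teardown logged above (killprocess of the nvmf target followed by nvmftestfini unloading the kernel initiator modules and clearing the test network) reduces to a few commands. A minimal sketch of that cleanup, assuming the pid and the interface/namespace names shown in this run; deleting the namespace directly is an approximation of the remove_spdk_ns helper:

    nvmfpid=72102                      # pid reported earlier in this run

    # stop the nvmf target and wait until the process is gone
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done

    # unload the kernel NVMe/TCP initiator modules (nvme_fabrics/nvme_keyring go with them)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # drop the target network namespace and flush the initiator-side address
    ip netns delete nvmf_tgt_ns_spdk   # stand-in for remove_spdk_ns
    ip -4 addr flush nvmf_init_if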
00:11:52.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.011 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:52.270 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:52.271 20:48:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:52.271 Cannot find device "nvmf_tgt_br" 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.271 Cannot find device "nvmf_tgt_br2" 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:52.271 Cannot find device "nvmf_tgt_br" 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:52.271 Cannot find device "nvmf_tgt_br2" 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.271 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:52.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:52.529 00:11:52.529 --- 10.0.0.2 ping statistics --- 00:11:52.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.529 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:52.529 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:52.529 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:11:52.529 00:11:52.529 --- 10.0.0.3 ping statistics --- 00:11:52.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.529 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:52.529 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:52.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
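Editor's note: the nvmf_veth_init steps traced above build a small veth/bridge topology. A minimal sketch of the same layout, using only the names and addresses that appear in the log:

    # namespace that owns the target-side interfaces
    ip netns add nvmf_tgt_ns_spdk

    # three veth pairs: one for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic in and across the bridge, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The bridge keeps all three host-side peers on one L2 segment, so 10.0.0.1, 10.0.0.2 and 10.0.0.3 are mutually reachable, which is exactly what the ping checks above and below confirm.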
00:11:52.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:11:52.529 00:11:52.529 --- 10.0.0.1 ping statistics --- 00:11:52.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.530 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.530 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.788 20:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72317 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72317 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72317 ']' 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.789 20:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:52.789 [2024-07-15 20:48:14.533815] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:11:52.789 [2024-07-15 20:48:14.533901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.789 [2024-07-15 20:48:14.684511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.048 [2024-07-15 20:48:14.764063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.048 [2024-07-15 20:48:14.764109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:53.048 [2024-07-15 20:48:14.764119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.048 [2024-07-15 20:48:14.764127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.048 [2024-07-15 20:48:14.764134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.048 [2024-07-15 20:48:14.764158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:11:53.617 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:53.876 true 00:11:53.876 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:53.876 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:11:54.135 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:11:54.135 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:11:54.135 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:54.135 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:54.135 20:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:11:54.393 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:11:54.393 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:11:54.393 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:54.651 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:54.651 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:11:54.651 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:11:54.651 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:11:54.651 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:54.651 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:11:54.909 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:11:54.909 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:11:54.909 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:55.210 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:11:55.210 20:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
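Editor's note: the sequence here launches the target with --wait-for-rpc and then drives the ssl socket implementation over RPC (default impl, tls-version, and the ktls toggle that continues just below). A minimal sketch of the same flow, with the socket-existence loop standing in for the test's waitforlisten helper:

    spdk=/home/vagrant/spdk_repo/spdk
    rpc=$spdk/scripts/rpc.py

    # start the target inside the namespace; --wait-for-rpc defers subsystem init
    # until framework_start_init is issued later over /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    # wait for the RPC unix socket to appear
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # make ssl the default socket implementation and exercise its options
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    [ "$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)" = 13 ]
    $rpc sock_impl_set_options -i ssl --enable-ktls
    [ "$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)" = true ]
    $rpc sock_impl_set_options -i ssl --disable-ktls

Setting the sock options before framework_start_init is the reason the target is started with --wait-for-rpc in the first place: the ssl implementation is configured while the app is still idle.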
00:11:55.210 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:11:55.210 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:11:55.210 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:55.468 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:11:55.468 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.rAO7dqCM6w 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RZiFpHS4Ag 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rAO7dqCM6w 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RZiFpHS4Ag 00:11:55.726 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:55.984 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:56.241 [2024-07-15 20:48:17.916583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:11:56.241 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rAO7dqCM6w 00:11:56.241 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rAO7dqCM6w 00:11:56.241 20:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:56.241 [2024-07-15 20:48:18.142566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.500 20:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:56.500 20:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:56.759 [2024-07-15 20:48:18.526023] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:56.759 [2024-07-15 20:48:18.526208] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.759 20:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:57.017 malloc0 00:11:57.017 20:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:57.017 20:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rAO7dqCM6w 00:11:57.276 [2024-07-15 20:48:19.074374] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:11:57.276 20:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rAO7dqCM6w 00:12:09.482 Initializing NVMe Controllers 00:12:09.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:09.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:09.482 Initialization complete. Launching workers. 
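Editor's note: collected from the trace above, the target-side TLS configuration is a short RPC sequence followed by an spdk_nvme_perf run from inside the namespace. A minimal sketch, with $psk standing in for the mktemp'd, chmod 0600 key file holding the NVMeTLSkey-1:01:... interchange-format PSK generated above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    psk=/tmp/tls_psk.key      # placeholder for the generated key file

    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k flags the listener as TLS-capable (logged above as experimental)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"

    # initiator side: run spdk_nvme_perf over TLS from inside the namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$psk"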
00:12:09.482 ======================================================== 00:12:09.482 Latency(us) 00:12:09.482 Device Information : IOPS MiB/s Average min max 00:12:09.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15108.66 59.02 4236.52 868.80 6330.57 00:12:09.482 ======================================================== 00:12:09.482 Total : 15108.66 59.02 4236.52 868.80 6330.57 00:12:09.482 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rAO7dqCM6w 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rAO7dqCM6w' 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72537 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72537 /var/tmp/bdevperf.sock 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72537 ']' 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.482 20:48:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:09.482 [2024-07-15 20:48:29.329581] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
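Editor's note: the bdevperf pass that starts here exercises the same TLS listener through a second application with its own RPC socket. A minimal sketch of that flow, assuming the same PSK file and a simple wait on the socket in place of waitforlisten:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    psk=/tmp/tls_psk.key      # placeholder for the key file registered with the target

    # start bdevperf idle (-z) so bdevs can be attached over its RPC socket
    $spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    until [ -S "$sock" ]; do sleep 0.1; done

    # attach a TLS-enabled NVMe-oF controller; --psk selects the pre-shared key
    $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$psk"

    # run the configured verify workload against the attached bdev
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests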
00:12:09.482 [2024-07-15 20:48:29.329651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72537 ] 00:12:09.482 [2024-07-15 20:48:29.470802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.482 [2024-07-15 20:48:29.554986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.482 [2024-07-15 20:48:29.595880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:09.482 20:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.482 20:48:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:09.482 20:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rAO7dqCM6w 00:12:09.482 [2024-07-15 20:48:30.308375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:09.482 [2024-07-15 20:48:30.308491] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:09.482 TLSTESTn1 00:12:09.482 20:48:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:09.482 Running I/O for 10 seconds... 00:12:19.515 00:12:19.515 Latency(us) 00:12:19.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.515 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:19.515 Verification LBA range: start 0x0 length 0x2000 00:12:19.515 TLSTESTn1 : 10.01 5827.73 22.76 0.00 0.00 21929.52 4316.43 29267.48 00:12:19.515 =================================================================================================================== 00:12:19.515 Total : 5827.73 22.76 0.00 0.00 21929.52 4316.43 29267.48 00:12:19.515 0 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72537 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72537 ']' 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72537 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72537 00:12:19.515 killing process with pid 72537 00:12:19.515 Received shutdown signal, test time was about 10.000000 seconds 00:12:19.515 00:12:19.515 Latency(us) 00:12:19.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.515 =================================================================================================================== 00:12:19.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72537' 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72537 00:12:19.515 [2024-07-15 20:48:40.542577] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72537 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RZiFpHS4Ag 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RZiFpHS4Ag 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RZiFpHS4Ag 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RZiFpHS4Ag' 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72665 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72665 /var/tmp/bdevperf.sock 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72665 ']' 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:19.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.515 20:48:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:19.515 [2024-07-15 20:48:40.785045] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:19.515 [2024-07-15 20:48:40.785238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:12:19.515 [2024-07-15 20:48:40.925311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.515 [2024-07-15 20:48:41.010239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.515 [2024-07-15 20:48:41.051785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:19.774 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.774 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:19.774 20:48:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RZiFpHS4Ag 00:12:20.032 [2024-07-15 20:48:41.793353] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:20.032 [2024-07-15 20:48:41.793460] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:20.032 [2024-07-15 20:48:41.803644] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:20.032 [2024-07-15 20:48:41.804589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8301f0 (107): Transport endpoint is not connected 00:12:20.032 [2024-07-15 20:48:41.805573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8301f0 (9): Bad file descriptor 00:12:20.032 [2024-07-15 20:48:41.806569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:20.032 [2024-07-15 20:48:41.806588] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:20.032 [2024-07-15 20:48:41.806600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:20.032 request: 00:12:20.032 { 00:12:20.032 "name": "TLSTEST", 00:12:20.032 "trtype": "tcp", 00:12:20.032 "traddr": "10.0.0.2", 00:12:20.032 "adrfam": "ipv4", 00:12:20.032 "trsvcid": "4420", 00:12:20.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.032 "prchk_reftag": false, 00:12:20.032 "prchk_guard": false, 00:12:20.032 "hdgst": false, 00:12:20.032 "ddgst": false, 00:12:20.032 "psk": "/tmp/tmp.RZiFpHS4Ag", 00:12:20.032 "method": "bdev_nvme_attach_controller", 00:12:20.032 "req_id": 1 00:12:20.032 } 00:12:20.032 Got JSON-RPC error response 00:12:20.032 response: 00:12:20.032 { 00:12:20.032 "code": -5, 00:12:20.032 "message": "Input/output error" 00:12:20.032 } 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72665 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72665 ']' 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72665 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72665 00:12:20.032 killing process with pid 72665 00:12:20.032 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.032 00:12:20.032 Latency(us) 00:12:20.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.032 =================================================================================================================== 00:12:20.032 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72665' 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72665 00:12:20.032 [2024-07-15 20:48:41.853736] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:20.032 20:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72665 00:12:20.289 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:20.289 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:20.289 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.289 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rAO7dqCM6w 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rAO7dqCM6w 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rAO7dqCM6w 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rAO7dqCM6w' 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72687 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72687 /var/tmp/bdevperf.sock 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72687 ']' 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:20.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.290 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:20.290 [2024-07-15 20:48:42.085682] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:20.290 [2024-07-15 20:48:42.085758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72687 ] 00:12:20.548 [2024-07-15 20:48:42.214873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.548 [2024-07-15 20:48:42.313293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.548 [2024-07-15 20:48:42.355040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:21.113 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.113 20:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:21.114 20:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rAO7dqCM6w 00:12:21.372 [2024-07-15 20:48:43.109468] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:21.372 [2024-07-15 20:48:43.109877] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:21.372 [2024-07-15 20:48:43.114413] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:21.372 [2024-07-15 20:48:43.114611] posix.c: 552:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:21.372 [2024-07-15 20:48:43.114763] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:21.372 [2024-07-15 20:48:43.115178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf31f0 (107): Transport endpoint is not connected 00:12:21.372 [2024-07-15 20:48:43.116154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf31f0 (9): Bad file descriptor 00:12:21.372 [2024-07-15 20:48:43.117148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:21.372 [2024-07-15 20:48:43.117181] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:21.372 [2024-07-15 20:48:43.117194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:21.372 request: 00:12:21.372 { 00:12:21.372 "name": "TLSTEST", 00:12:21.372 "trtype": "tcp", 00:12:21.372 "traddr": "10.0.0.2", 00:12:21.372 "adrfam": "ipv4", 00:12:21.372 "trsvcid": "4420", 00:12:21.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:21.372 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:21.372 "prchk_reftag": false, 00:12:21.372 "prchk_guard": false, 00:12:21.372 "hdgst": false, 00:12:21.372 "ddgst": false, 00:12:21.372 "psk": "/tmp/tmp.rAO7dqCM6w", 00:12:21.372 "method": "bdev_nvme_attach_controller", 00:12:21.372 "req_id": 1 00:12:21.372 } 00:12:21.372 Got JSON-RPC error response 00:12:21.372 response: 00:12:21.372 { 00:12:21.372 "code": -5, 00:12:21.372 "message": "Input/output error" 00:12:21.372 } 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72687 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72687 ']' 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72687 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72687 00:12:21.372 killing process with pid 72687 00:12:21.372 Received shutdown signal, test time was about 10.000000 seconds 00:12:21.372 00:12:21.372 Latency(us) 00:12:21.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.372 =================================================================================================================== 00:12:21.372 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72687' 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72687 00:12:21.372 [2024-07-15 20:48:43.176333] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:21.372 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72687 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rAO7dqCM6w 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rAO7dqCM6w 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rAO7dqCM6w 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rAO7dqCM6w' 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72720 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72720 /var/tmp/bdevperf.sock 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72720 ']' 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.629 20:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:21.629 [2024-07-15 20:48:43.407613] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:21.629 [2024-07-15 20:48:43.407683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72720 ] 00:12:21.629 [2024-07-15 20:48:43.538869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.887 [2024-07-15 20:48:43.618640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.887 [2024-07-15 20:48:43.660096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:22.453 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.453 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:22.454 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rAO7dqCM6w 00:12:22.712 [2024-07-15 20:48:44.409102] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:22.712 [2024-07-15 20:48:44.409416] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:22.712 [2024-07-15 20:48:44.419523] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:22.712 [2024-07-15 20:48:44.419696] posix.c: 552:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:22.712 [2024-07-15 20:48:44.419830] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:22.712 [2024-07-15 20:48:44.420683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13631f0 (107): Transport endpoint is not connected 00:12:22.712 [2024-07-15 20:48:44.421669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13631f0 (9): Bad file descriptor 00:12:22.712 [2024-07-15 20:48:44.422665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:22.712 [2024-07-15 20:48:44.422685] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:22.712 [2024-07-15 20:48:44.422698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:12:22.712 request: 00:12:22.712 { 00:12:22.712 "name": "TLSTEST", 00:12:22.712 "trtype": "tcp", 00:12:22.712 "traddr": "10.0.0.2", 00:12:22.712 "adrfam": "ipv4", 00:12:22.712 "trsvcid": "4420", 00:12:22.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:22.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.712 "prchk_reftag": false, 00:12:22.712 "prchk_guard": false, 00:12:22.712 "hdgst": false, 00:12:22.712 "ddgst": false, 00:12:22.712 "psk": "/tmp/tmp.rAO7dqCM6w", 00:12:22.712 "method": "bdev_nvme_attach_controller", 00:12:22.712 "req_id": 1 00:12:22.712 } 00:12:22.712 Got JSON-RPC error response 00:12:22.712 response: 00:12:22.712 { 00:12:22.712 "code": -5, 00:12:22.712 "message": "Input/output error" 00:12:22.712 } 00:12:22.712 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72720 00:12:22.712 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72720 ']' 00:12:22.712 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72720 00:12:22.712 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:22.712 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.712 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72720 00:12:22.712 killing process with pid 72720 00:12:22.712 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.712 00:12:22.712 Latency(us) 00:12:22.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.712 =================================================================================================================== 00:12:22.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:22.713 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:22.713 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:22.713 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72720' 00:12:22.713 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72720 00:12:22.713 [2024-07-15 20:48:44.483159] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:22.713 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72720 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72742 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72742 /var/tmp/bdevperf.sock 00:12:22.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72742 ']' 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.972 20:48:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:22.972 [2024-07-15 20:48:44.719402] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:22.972 [2024-07-15 20:48:44.719577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72742 ] 00:12:22.972 [2024-07-15 20:48:44.859554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.230 [2024-07-15 20:48:44.943683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.231 [2024-07-15 20:48:44.985346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:23.825 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.825 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:23.825 20:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:23.825 [2024-07-15 20:48:45.727582] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:24.084 [2024-07-15 20:48:45.729658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x834c00 (9): Bad file descriptor 00:12:24.084 [2024-07-15 20:48:45.730652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:24.084 [2024-07-15 20:48:45.730674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:24.084 [2024-07-15 20:48:45.730686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:24.084 request: 00:12:24.084 { 00:12:24.084 "name": "TLSTEST", 00:12:24.084 "trtype": "tcp", 00:12:24.084 "traddr": "10.0.0.2", 00:12:24.084 "adrfam": "ipv4", 00:12:24.084 "trsvcid": "4420", 00:12:24.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:24.084 "prchk_reftag": false, 00:12:24.084 "prchk_guard": false, 00:12:24.084 "hdgst": false, 00:12:24.084 "ddgst": false, 00:12:24.084 "method": "bdev_nvme_attach_controller", 00:12:24.084 "req_id": 1 00:12:24.084 } 00:12:24.084 Got JSON-RPC error response 00:12:24.084 response: 00:12:24.084 { 00:12:24.084 "code": -5, 00:12:24.084 "message": "Input/output error" 00:12:24.084 } 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72742 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72742 ']' 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72742 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72742 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72742' 00:12:24.084 killing process with pid 72742 00:12:24.084 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.084 00:12:24.084 Latency(us) 00:12:24.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.084 =================================================================================================================== 00:12:24.084 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72742 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72742 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72317 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72317 ']' 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72317 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:24.084 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.085 20:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72317 00:12:24.344 killing process with pid 72317 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72317' 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72317 00:12:24.344 [2024-07-15 20:48:46.006233] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72317 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:12:24.344 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.CnXej2IDDC 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.CnXej2IDDC 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72774 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72774 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72774 ']' 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.603 20:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:24.603 [2024-07-15 20:48:46.323477] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
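(The key_long value generated just above comes from the test's format_interchange_psk helper, which wraps the configured 48-character key in the NVMe TLS PSK interchange form: an NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and a base64 encoding of the key bytes followed by their CRC32. Below is a minimal Python sketch of that formatting, reconstructed from the trace rather than copied from the SPDK shell helper; the little-endian CRC byte order is an assumption.

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Reconstruction of the helper traced above (an assumption, not the
    # verbatim SPDK shell code): append the CRC32 of the key bytes
    # (little-endian here), base64-encode the result, and wrap it as
    # NVMeTLSkey-1:<hash>:<base64>:.
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"

# Should reproduce the key_long value logged above for the same input key.
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
)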
00:12:24.603 [2024-07-15 20:48:46.323542] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.603 [2024-07-15 20:48:46.466281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.862 [2024-07-15 20:48:46.548343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.862 [2024-07-15 20:48:46.548394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.862 [2024-07-15 20:48:46.548415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.862 [2024-07-15 20:48:46.548423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.862 [2024-07-15 20:48:46.548445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.862 [2024-07-15 20:48:46.548468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.862 [2024-07-15 20:48:46.589440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.CnXej2IDDC 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CnXej2IDDC 00:12:25.429 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:25.687 [2024-07-15 20:48:47.382307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.687 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:25.687 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:25.946 [2024-07-15 20:48:47.761794] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:25.946 [2024-07-15 20:48:47.761979] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.946 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:26.204 malloc0 00:12:26.204 20:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:12:26.463 
[2024-07-15 20:48:48.329738] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CnXej2IDDC 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CnXej2IDDC' 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72823 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72823 /var/tmp/bdevperf.sock 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72823 ']' 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.463 20:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:26.720 [2024-07-15 20:48:48.396268] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:26.720 [2024-07-15 20:48:48.396519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72823 ] 00:12:26.720 [2024-07-15 20:48:48.535720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.977 [2024-07-15 20:48:48.634429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.977 [2024-07-15 20:48:48.676432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:27.565 20:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.565 20:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:27.565 20:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:12:27.565 [2024-07-15 20:48:49.415234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:27.565 [2024-07-15 20:48:49.415361] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:27.822 TLSTESTn1 00:12:27.822 20:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:27.822 Running I/O for 10 seconds... 00:12:37.789 00:12:37.789 Latency(us) 00:12:37.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.789 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:37.789 Verification LBA range: start 0x0 length 0x2000 00:12:37.789 TLSTESTn1 : 10.01 5839.47 22.81 0.00 0.00 21885.89 4553.30 16739.32 00:12:37.789 =================================================================================================================== 00:12:37.789 Total : 5839.47 22.81 0.00 0.00 21885.89 4553.30 16739.32 00:12:37.789 0 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72823 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72823 ']' 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72823 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72823 00:12:37.789 killing process with pid 72823 00:12:37.789 Received shutdown signal, test time was about 10.000000 seconds 00:12:37.789 00:12:37.789 Latency(us) 00:12:37.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.789 =================================================================================================================== 00:12:37.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72823' 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72823 00:12:37.789 [2024-07-15 20:48:59.669562] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:37.789 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72823 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.CnXej2IDDC 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CnXej2IDDC 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CnXej2IDDC 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CnXej2IDDC 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.CnXej2IDDC' 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72958 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72958 /var/tmp/bdevperf.sock 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72958 ']' 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.047 20:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.047 [2024-07-15 20:48:59.904234] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:38.047 [2024-07-15 20:48:59.904411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:12:38.306 [2024-07-15 20:49:00.049098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.306 [2024-07-15 20:49:00.136056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.306 [2024-07-15 20:49:00.177737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:12:39.239 [2024-07-15 20:49:00.946696] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:39.239 [2024-07-15 20:49:00.947279] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:39.239 [2024-07-15 20:49:00.947401] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.CnXej2IDDC 00:12:39.239 request: 00:12:39.239 { 00:12:39.239 "name": "TLSTEST", 00:12:39.239 "trtype": "tcp", 00:12:39.239 "traddr": "10.0.0.2", 00:12:39.239 "adrfam": "ipv4", 00:12:39.239 "trsvcid": "4420", 00:12:39.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.239 "prchk_reftag": false, 00:12:39.239 "prchk_guard": false, 00:12:39.239 "hdgst": false, 00:12:39.239 "ddgst": false, 00:12:39.239 "psk": "/tmp/tmp.CnXej2IDDC", 00:12:39.239 "method": "bdev_nvme_attach_controller", 00:12:39.239 "req_id": 1 00:12:39.239 } 00:12:39.239 Got JSON-RPC error response 00:12:39.239 response: 00:12:39.239 { 00:12:39.239 "code": -1, 00:12:39.239 "message": "Operation not permitted" 00:12:39.239 } 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72958 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72958 ']' 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72958 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.239 20:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72958 00:12:39.239 killing process with pid 72958 00:12:39.239 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.239 00:12:39.239 Latency(us) 00:12:39.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.239 =================================================================================================================== 00:12:39.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:39.239 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:39.239 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:39.239 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 72958' 00:12:39.239 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72958 00:12:39.239 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72958 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 72774 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72774 ']' 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72774 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72774 00:12:39.498 killing process with pid 72774 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72774' 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72774 00:12:39.498 [2024-07-15 20:49:01.217791] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:39.498 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72774 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72990 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72990 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72990 ']' 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.756 20:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:39.757 [2024-07-15 20:49:01.471213] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
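(The "Incorrect permissions for PSK file" failure above was triggered by deliberately loosening the key file to 0666; the test then restores 0600 before restarting the target here. The sketch below illustrates that kind of group/other-permission check; the helper name and exact policy are assumptions for illustration, not SPDK's implementation.

import os
import stat

def psk_file_permissions_ok(path: str) -> bool:
    # Illustrative check (assumed policy): reject the PSK file if any group
    # or other permission bits are set, i.e. anything looser than 0600.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# Example usage with a hypothetical key path:
#   os.chmod("/tmp/tls_psk", 0o666)  -> psk_file_permissions_ok() is False
#   os.chmod("/tmp/tls_psk", 0o600)  -> psk_file_permissions_ok() is True
)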
00:12:39.757 [2024-07-15 20:49:01.471782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.757 [2024-07-15 20:49:01.611777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.015 [2024-07-15 20:49:01.686129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.015 [2024-07-15 20:49:01.686192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.015 [2024-07-15 20:49:01.686202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.015 [2024-07-15 20:49:01.686210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.015 [2024-07-15 20:49:01.686216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.015 [2024-07-15 20:49:01.686247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.015 [2024-07-15 20:49:01.727191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.CnXej2IDDC 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CnXej2IDDC 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.CnXej2IDDC 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CnXej2IDDC 00:12:40.581 20:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:40.839 [2024-07-15 20:49:02.531935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.839 20:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:40.839 20:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:41.099 [2024-07-15 20:49:02.879409] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:12:41.099 [2024-07-15 20:49:02.879580] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.099 20:49:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:41.358 malloc0 00:12:41.358 20:49:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:41.358 20:49:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:12:41.615 [2024-07-15 20:49:03.407472] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:41.615 [2024-07-15 20:49:03.407509] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:41.615 [2024-07-15 20:49:03.407535] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:41.615 request: 00:12:41.615 { 00:12:41.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.615 "host": "nqn.2016-06.io.spdk:host1", 00:12:41.615 "psk": "/tmp/tmp.CnXej2IDDC", 00:12:41.615 "method": "nvmf_subsystem_add_host", 00:12:41.615 "req_id": 1 00:12:41.615 } 00:12:41.615 Got JSON-RPC error response 00:12:41.615 response: 00:12:41.615 { 00:12:41.615 "code": -32603, 00:12:41.615 "message": "Internal error" 00:12:41.615 } 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 72990 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72990 ']' 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72990 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72990 00:12:41.615 killing process with pid 72990 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72990' 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72990 00:12:41.615 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72990 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.CnXej2IDDC 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73047 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73047 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73047 ']' 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.872 20:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.872 [2024-07-15 20:49:03.724145] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:12:41.872 [2024-07-15 20:49:03.724239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.129 [2024-07-15 20:49:03.864351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.129 [2024-07-15 20:49:03.941156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.129 [2024-07-15 20:49:03.941215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.129 [2024-07-15 20:49:03.941224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.129 [2024-07-15 20:49:03.941232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.129 [2024-07-15 20:49:03.941239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:42.129 [2024-07-15 20:49:03.941263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.129 [2024-07-15 20:49:03.982113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:42.695 20:49:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.695 20:49:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:42.695 20:49:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.695 20:49:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:42.695 20:49:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.985 20:49:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.985 20:49:04 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.CnXej2IDDC 00:12:42.985 20:49:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CnXej2IDDC 00:12:42.985 20:49:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:42.985 [2024-07-15 20:49:04.775087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.985 20:49:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:43.244 20:49:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:43.502 [2024-07-15 20:49:05.154526] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:43.502 [2024-07-15 20:49:05.154716] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.502 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:43.502 malloc0 00:12:43.502 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:43.761 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:12:44.019 [2024-07-15 20:49:05.690534] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73099 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73099 /var/tmp/bdevperf.sock 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73099 ']' 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.019 20:49:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.020 20:49:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:44.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.020 20:49:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.020 20:49:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:44.020 [2024-07-15 20:49:05.758858] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:12:44.020 [2024-07-15 20:49:05.759108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73099 ] 00:12:44.020 [2024-07-15 20:49:05.899236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.279 [2024-07-15 20:49:05.984199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.279 [2024-07-15 20:49:06.025552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:44.847 20:49:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.847 20:49:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:44.847 20:49:06 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:12:45.105 [2024-07-15 20:49:06.762246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:45.105 [2024-07-15 20:49:06.762354] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:45.105 TLSTESTn1 00:12:45.105 20:49:06 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:45.365 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:12:45.365 "subsystems": [ 00:12:45.365 { 00:12:45.365 "subsystem": "keyring", 00:12:45.365 "config": [] 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "subsystem": "iobuf", 00:12:45.365 "config": [ 00:12:45.365 { 00:12:45.365 "method": "iobuf_set_options", 00:12:45.365 "params": { 00:12:45.365 "small_pool_count": 8192, 00:12:45.365 "large_pool_count": 1024, 00:12:45.365 "small_bufsize": 8192, 00:12:45.365 "large_bufsize": 135168 00:12:45.365 } 00:12:45.365 } 00:12:45.365 ] 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "subsystem": "sock", 00:12:45.365 "config": [ 00:12:45.365 { 00:12:45.365 "method": "sock_set_default_impl", 00:12:45.365 "params": { 00:12:45.365 "impl_name": "uring" 00:12:45.365 } 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "method": "sock_impl_set_options", 00:12:45.365 "params": { 00:12:45.365 "impl_name": "ssl", 00:12:45.365 "recv_buf_size": 4096, 00:12:45.365 "send_buf_size": 4096, 00:12:45.365 "enable_recv_pipe": true, 00:12:45.365 "enable_quickack": false, 00:12:45.365 "enable_placement_id": 0, 00:12:45.365 "enable_zerocopy_send_server": true, 00:12:45.365 "enable_zerocopy_send_client": false, 00:12:45.365 "zerocopy_threshold": 0, 00:12:45.365 "tls_version": 0, 00:12:45.365 "enable_ktls": false 00:12:45.365 } 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "method": "sock_impl_set_options", 00:12:45.365 "params": { 00:12:45.365 "impl_name": "posix", 00:12:45.365 "recv_buf_size": 2097152, 
00:12:45.365 "send_buf_size": 2097152, 00:12:45.365 "enable_recv_pipe": true, 00:12:45.365 "enable_quickack": false, 00:12:45.365 "enable_placement_id": 0, 00:12:45.365 "enable_zerocopy_send_server": true, 00:12:45.365 "enable_zerocopy_send_client": false, 00:12:45.365 "zerocopy_threshold": 0, 00:12:45.365 "tls_version": 0, 00:12:45.365 "enable_ktls": false 00:12:45.365 } 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "method": "sock_impl_set_options", 00:12:45.365 "params": { 00:12:45.365 "impl_name": "uring", 00:12:45.365 "recv_buf_size": 2097152, 00:12:45.365 "send_buf_size": 2097152, 00:12:45.365 "enable_recv_pipe": true, 00:12:45.365 "enable_quickack": false, 00:12:45.365 "enable_placement_id": 0, 00:12:45.365 "enable_zerocopy_send_server": false, 00:12:45.365 "enable_zerocopy_send_client": false, 00:12:45.365 "zerocopy_threshold": 0, 00:12:45.365 "tls_version": 0, 00:12:45.365 "enable_ktls": false 00:12:45.365 } 00:12:45.365 } 00:12:45.365 ] 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "subsystem": "vmd", 00:12:45.365 "config": [] 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "subsystem": "accel", 00:12:45.365 "config": [ 00:12:45.365 { 00:12:45.365 "method": "accel_set_options", 00:12:45.365 "params": { 00:12:45.365 "small_cache_size": 128, 00:12:45.365 "large_cache_size": 16, 00:12:45.365 "task_count": 2048, 00:12:45.365 "sequence_count": 2048, 00:12:45.365 "buf_count": 2048 00:12:45.365 } 00:12:45.365 } 00:12:45.365 ] 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "subsystem": "bdev", 00:12:45.365 "config": [ 00:12:45.365 { 00:12:45.365 "method": "bdev_set_options", 00:12:45.365 "params": { 00:12:45.365 "bdev_io_pool_size": 65535, 00:12:45.365 "bdev_io_cache_size": 256, 00:12:45.365 "bdev_auto_examine": true, 00:12:45.365 "iobuf_small_cache_size": 128, 00:12:45.365 "iobuf_large_cache_size": 16 00:12:45.365 } 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "method": "bdev_raid_set_options", 00:12:45.365 "params": { 00:12:45.365 "process_window_size_kb": 1024 00:12:45.365 } 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "method": "bdev_iscsi_set_options", 00:12:45.365 "params": { 00:12:45.365 "timeout_sec": 30 00:12:45.365 } 00:12:45.365 }, 00:12:45.365 { 00:12:45.365 "method": "bdev_nvme_set_options", 00:12:45.365 "params": { 00:12:45.365 "action_on_timeout": "none", 00:12:45.365 "timeout_us": 0, 00:12:45.365 "timeout_admin_us": 0, 00:12:45.365 "keep_alive_timeout_ms": 10000, 00:12:45.365 "arbitration_burst": 0, 00:12:45.365 "low_priority_weight": 0, 00:12:45.365 "medium_priority_weight": 0, 00:12:45.365 "high_priority_weight": 0, 00:12:45.365 "nvme_adminq_poll_period_us": 10000, 00:12:45.365 "nvme_ioq_poll_period_us": 0, 00:12:45.365 "io_queue_requests": 0, 00:12:45.365 "delay_cmd_submit": true, 00:12:45.365 "transport_retry_count": 4, 00:12:45.365 "bdev_retry_count": 3, 00:12:45.365 "transport_ack_timeout": 0, 00:12:45.365 "ctrlr_loss_timeout_sec": 0, 00:12:45.365 "reconnect_delay_sec": 0, 00:12:45.365 "fast_io_fail_timeout_sec": 0, 00:12:45.365 "disable_auto_failback": false, 00:12:45.365 "generate_uuids": false, 00:12:45.365 "transport_tos": 0, 00:12:45.365 "nvme_error_stat": false, 00:12:45.365 "rdma_srq_size": 0, 00:12:45.365 "io_path_stat": false, 00:12:45.365 "allow_accel_sequence": false, 00:12:45.365 "rdma_max_cq_size": 0, 00:12:45.365 "rdma_cm_event_timeout_ms": 0, 00:12:45.365 "dhchap_digests": [ 00:12:45.365 "sha256", 00:12:45.365 "sha384", 00:12:45.365 "sha512" 00:12:45.365 ], 00:12:45.365 "dhchap_dhgroups": [ 00:12:45.365 "null", 00:12:45.365 "ffdhe2048", 00:12:45.365 "ffdhe3072", 
00:12:45.365 "ffdhe4096", 00:12:45.365 "ffdhe6144", 00:12:45.365 "ffdhe8192" 00:12:45.365 ] 00:12:45.365 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "bdev_nvme_set_hotplug", 00:12:45.366 "params": { 00:12:45.366 "period_us": 100000, 00:12:45.366 "enable": false 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "bdev_malloc_create", 00:12:45.366 "params": { 00:12:45.366 "name": "malloc0", 00:12:45.366 "num_blocks": 8192, 00:12:45.366 "block_size": 4096, 00:12:45.366 "physical_block_size": 4096, 00:12:45.366 "uuid": "f5e0fbe7-33e1-4c37-ac38-71c7e2fea28f", 00:12:45.366 "optimal_io_boundary": 0 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "bdev_wait_for_examine" 00:12:45.366 } 00:12:45.366 ] 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "subsystem": "nbd", 00:12:45.366 "config": [] 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "subsystem": "scheduler", 00:12:45.366 "config": [ 00:12:45.366 { 00:12:45.366 "method": "framework_set_scheduler", 00:12:45.366 "params": { 00:12:45.366 "name": "static" 00:12:45.366 } 00:12:45.366 } 00:12:45.366 ] 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "subsystem": "nvmf", 00:12:45.366 "config": [ 00:12:45.366 { 00:12:45.366 "method": "nvmf_set_config", 00:12:45.366 "params": { 00:12:45.366 "discovery_filter": "match_any", 00:12:45.366 "admin_cmd_passthru": { 00:12:45.366 "identify_ctrlr": false 00:12:45.366 } 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_set_max_subsystems", 00:12:45.366 "params": { 00:12:45.366 "max_subsystems": 1024 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_set_crdt", 00:12:45.366 "params": { 00:12:45.366 "crdt1": 0, 00:12:45.366 "crdt2": 0, 00:12:45.366 "crdt3": 0 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_create_transport", 00:12:45.366 "params": { 00:12:45.366 "trtype": "TCP", 00:12:45.366 "max_queue_depth": 128, 00:12:45.366 "max_io_qpairs_per_ctrlr": 127, 00:12:45.366 "in_capsule_data_size": 4096, 00:12:45.366 "max_io_size": 131072, 00:12:45.366 "io_unit_size": 131072, 00:12:45.366 "max_aq_depth": 128, 00:12:45.366 "num_shared_buffers": 511, 00:12:45.366 "buf_cache_size": 4294967295, 00:12:45.366 "dif_insert_or_strip": false, 00:12:45.366 "zcopy": false, 00:12:45.366 "c2h_success": false, 00:12:45.366 "sock_priority": 0, 00:12:45.366 "abort_timeout_sec": 1, 00:12:45.366 "ack_timeout": 0, 00:12:45.366 "data_wr_pool_size": 0 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_create_subsystem", 00:12:45.366 "params": { 00:12:45.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.366 "allow_any_host": false, 00:12:45.366 "serial_number": "SPDK00000000000001", 00:12:45.366 "model_number": "SPDK bdev Controller", 00:12:45.366 "max_namespaces": 10, 00:12:45.366 "min_cntlid": 1, 00:12:45.366 "max_cntlid": 65519, 00:12:45.366 "ana_reporting": false 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_subsystem_add_host", 00:12:45.366 "params": { 00:12:45.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.366 "host": "nqn.2016-06.io.spdk:host1", 00:12:45.366 "psk": "/tmp/tmp.CnXej2IDDC" 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_subsystem_add_ns", 00:12:45.366 "params": { 00:12:45.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.366 "namespace": { 00:12:45.366 "nsid": 1, 00:12:45.366 "bdev_name": "malloc0", 00:12:45.366 "nguid": "F5E0FBE733E14C37AC3871C7E2FEA28F", 00:12:45.366 "uuid": "f5e0fbe7-33e1-4c37-ac38-71c7e2fea28f", 
00:12:45.366 "no_auto_visible": false 00:12:45.366 } 00:12:45.366 } 00:12:45.366 }, 00:12:45.366 { 00:12:45.366 "method": "nvmf_subsystem_add_listener", 00:12:45.366 "params": { 00:12:45.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.366 "listen_address": { 00:12:45.366 "trtype": "TCP", 00:12:45.366 "adrfam": "IPv4", 00:12:45.366 "traddr": "10.0.0.2", 00:12:45.366 "trsvcid": "4420" 00:12:45.366 }, 00:12:45.366 "secure_channel": true 00:12:45.366 } 00:12:45.366 } 00:12:45.366 ] 00:12:45.366 } 00:12:45.366 ] 00:12:45.366 }' 00:12:45.366 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:45.625 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:12:45.625 "subsystems": [ 00:12:45.626 { 00:12:45.626 "subsystem": "keyring", 00:12:45.626 "config": [] 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "subsystem": "iobuf", 00:12:45.626 "config": [ 00:12:45.626 { 00:12:45.626 "method": "iobuf_set_options", 00:12:45.626 "params": { 00:12:45.626 "small_pool_count": 8192, 00:12:45.626 "large_pool_count": 1024, 00:12:45.626 "small_bufsize": 8192, 00:12:45.626 "large_bufsize": 135168 00:12:45.626 } 00:12:45.626 } 00:12:45.626 ] 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "subsystem": "sock", 00:12:45.626 "config": [ 00:12:45.626 { 00:12:45.626 "method": "sock_set_default_impl", 00:12:45.626 "params": { 00:12:45.626 "impl_name": "uring" 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "sock_impl_set_options", 00:12:45.626 "params": { 00:12:45.626 "impl_name": "ssl", 00:12:45.626 "recv_buf_size": 4096, 00:12:45.626 "send_buf_size": 4096, 00:12:45.626 "enable_recv_pipe": true, 00:12:45.626 "enable_quickack": false, 00:12:45.626 "enable_placement_id": 0, 00:12:45.626 "enable_zerocopy_send_server": true, 00:12:45.626 "enable_zerocopy_send_client": false, 00:12:45.626 "zerocopy_threshold": 0, 00:12:45.626 "tls_version": 0, 00:12:45.626 "enable_ktls": false 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "sock_impl_set_options", 00:12:45.626 "params": { 00:12:45.626 "impl_name": "posix", 00:12:45.626 "recv_buf_size": 2097152, 00:12:45.626 "send_buf_size": 2097152, 00:12:45.626 "enable_recv_pipe": true, 00:12:45.626 "enable_quickack": false, 00:12:45.626 "enable_placement_id": 0, 00:12:45.626 "enable_zerocopy_send_server": true, 00:12:45.626 "enable_zerocopy_send_client": false, 00:12:45.626 "zerocopy_threshold": 0, 00:12:45.626 "tls_version": 0, 00:12:45.626 "enable_ktls": false 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "sock_impl_set_options", 00:12:45.626 "params": { 00:12:45.626 "impl_name": "uring", 00:12:45.626 "recv_buf_size": 2097152, 00:12:45.626 "send_buf_size": 2097152, 00:12:45.626 "enable_recv_pipe": true, 00:12:45.626 "enable_quickack": false, 00:12:45.626 "enable_placement_id": 0, 00:12:45.626 "enable_zerocopy_send_server": false, 00:12:45.626 "enable_zerocopy_send_client": false, 00:12:45.626 "zerocopy_threshold": 0, 00:12:45.626 "tls_version": 0, 00:12:45.626 "enable_ktls": false 00:12:45.626 } 00:12:45.626 } 00:12:45.626 ] 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "subsystem": "vmd", 00:12:45.626 "config": [] 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "subsystem": "accel", 00:12:45.626 "config": [ 00:12:45.626 { 00:12:45.626 "method": "accel_set_options", 00:12:45.626 "params": { 00:12:45.626 "small_cache_size": 128, 00:12:45.626 "large_cache_size": 16, 00:12:45.626 "task_count": 2048, 00:12:45.626 "sequence_count": 
2048, 00:12:45.626 "buf_count": 2048 00:12:45.626 } 00:12:45.626 } 00:12:45.626 ] 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "subsystem": "bdev", 00:12:45.626 "config": [ 00:12:45.626 { 00:12:45.626 "method": "bdev_set_options", 00:12:45.626 "params": { 00:12:45.626 "bdev_io_pool_size": 65535, 00:12:45.626 "bdev_io_cache_size": 256, 00:12:45.626 "bdev_auto_examine": true, 00:12:45.626 "iobuf_small_cache_size": 128, 00:12:45.626 "iobuf_large_cache_size": 16 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "bdev_raid_set_options", 00:12:45.626 "params": { 00:12:45.626 "process_window_size_kb": 1024 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "bdev_iscsi_set_options", 00:12:45.626 "params": { 00:12:45.626 "timeout_sec": 30 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "bdev_nvme_set_options", 00:12:45.626 "params": { 00:12:45.626 "action_on_timeout": "none", 00:12:45.626 "timeout_us": 0, 00:12:45.626 "timeout_admin_us": 0, 00:12:45.626 "keep_alive_timeout_ms": 10000, 00:12:45.626 "arbitration_burst": 0, 00:12:45.626 "low_priority_weight": 0, 00:12:45.626 "medium_priority_weight": 0, 00:12:45.626 "high_priority_weight": 0, 00:12:45.626 "nvme_adminq_poll_period_us": 10000, 00:12:45.626 "nvme_ioq_poll_period_us": 0, 00:12:45.626 "io_queue_requests": 512, 00:12:45.626 "delay_cmd_submit": true, 00:12:45.626 "transport_retry_count": 4, 00:12:45.626 "bdev_retry_count": 3, 00:12:45.626 "transport_ack_timeout": 0, 00:12:45.626 "ctrlr_loss_timeout_sec": 0, 00:12:45.626 "reconnect_delay_sec": 0, 00:12:45.626 "fast_io_fail_timeout_sec": 0, 00:12:45.626 "disable_auto_failback": false, 00:12:45.626 "generate_uuids": false, 00:12:45.626 "transport_tos": 0, 00:12:45.626 "nvme_error_stat": false, 00:12:45.626 "rdma_srq_size": 0, 00:12:45.626 "io_path_stat": false, 00:12:45.626 "allow_accel_sequence": false, 00:12:45.626 "rdma_max_cq_size": 0, 00:12:45.626 "rdma_cm_event_timeout_ms": 0, 00:12:45.626 "dhchap_digests": [ 00:12:45.626 "sha256", 00:12:45.626 "sha384", 00:12:45.626 "sha512" 00:12:45.626 ], 00:12:45.626 "dhchap_dhgroups": [ 00:12:45.626 "null", 00:12:45.626 "ffdhe2048", 00:12:45.626 "ffdhe3072", 00:12:45.626 "ffdhe4096", 00:12:45.626 "ffdhe6144", 00:12:45.626 "ffdhe8192" 00:12:45.626 ] 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "bdev_nvme_attach_controller", 00:12:45.626 "params": { 00:12:45.626 "name": "TLSTEST", 00:12:45.626 "trtype": "TCP", 00:12:45.626 "adrfam": "IPv4", 00:12:45.626 "traddr": "10.0.0.2", 00:12:45.626 "trsvcid": "4420", 00:12:45.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.626 "prchk_reftag": false, 00:12:45.626 "prchk_guard": false, 00:12:45.626 "ctrlr_loss_timeout_sec": 0, 00:12:45.626 "reconnect_delay_sec": 0, 00:12:45.626 "fast_io_fail_timeout_sec": 0, 00:12:45.626 "psk": "/tmp/tmp.CnXej2IDDC", 00:12:45.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.626 "hdgst": false, 00:12:45.626 "ddgst": false 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "bdev_nvme_set_hotplug", 00:12:45.626 "params": { 00:12:45.626 "period_us": 100000, 00:12:45.626 "enable": false 00:12:45.626 } 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "method": "bdev_wait_for_examine" 00:12:45.626 } 00:12:45.626 ] 00:12:45.626 }, 00:12:45.626 { 00:12:45.626 "subsystem": "nbd", 00:12:45.626 "config": [] 00:12:45.626 } 00:12:45.626 ] 00:12:45.626 }' 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73099 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 73099 ']' 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73099 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73099 00:12:45.626 killing process with pid 73099 00:12:45.626 Received shutdown signal, test time was about 10.000000 seconds 00:12:45.626 00:12:45.626 Latency(us) 00:12:45.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.626 =================================================================================================================== 00:12:45.626 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73099' 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73099 00:12:45.626 [2024-07-15 20:49:07.441580] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:45.626 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73099 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73047 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73047 ']' 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73047 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73047 00:12:45.886 killing process with pid 73047 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73047' 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73047 00:12:45.886 [2024-07-15 20:49:07.664764] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:45.886 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73047 00:12:46.146 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:12:46.146 "subsystems": [ 00:12:46.146 { 00:12:46.146 "subsystem": "keyring", 00:12:46.146 "config": [] 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "subsystem": "iobuf", 00:12:46.146 "config": [ 00:12:46.146 { 00:12:46.146 "method": "iobuf_set_options", 00:12:46.146 "params": { 00:12:46.146 "small_pool_count": 8192, 00:12:46.146 "large_pool_count": 1024, 00:12:46.146 "small_bufsize": 8192, 00:12:46.146 "large_bufsize": 135168 00:12:46.146 } 00:12:46.146 } 00:12:46.146 ] 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "subsystem": "sock", 00:12:46.146 "config": [ 00:12:46.146 { 00:12:46.146 "method": "sock_set_default_impl", 
00:12:46.146 "params": { 00:12:46.146 "impl_name": "uring" 00:12:46.146 } 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "method": "sock_impl_set_options", 00:12:46.146 "params": { 00:12:46.146 "impl_name": "ssl", 00:12:46.146 "recv_buf_size": 4096, 00:12:46.146 "send_buf_size": 4096, 00:12:46.146 "enable_recv_pipe": true, 00:12:46.146 "enable_quickack": false, 00:12:46.146 "enable_placement_id": 0, 00:12:46.146 "enable_zerocopy_send_server": true, 00:12:46.146 "enable_zerocopy_send_client": false, 00:12:46.146 "zerocopy_threshold": 0, 00:12:46.146 "tls_version": 0, 00:12:46.146 "enable_ktls": false 00:12:46.146 } 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "method": "sock_impl_set_options", 00:12:46.146 "params": { 00:12:46.146 "impl_name": "posix", 00:12:46.146 "recv_buf_size": 2097152, 00:12:46.146 "send_buf_size": 2097152, 00:12:46.146 "enable_recv_pipe": true, 00:12:46.146 "enable_quickack": false, 00:12:46.146 "enable_placement_id": 0, 00:12:46.146 "enable_zerocopy_send_server": true, 00:12:46.146 "enable_zerocopy_send_client": false, 00:12:46.146 "zerocopy_threshold": 0, 00:12:46.146 "tls_version": 0, 00:12:46.146 "enable_ktls": false 00:12:46.146 } 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "method": "sock_impl_set_options", 00:12:46.146 "params": { 00:12:46.146 "impl_name": "uring", 00:12:46.146 "recv_buf_size": 2097152, 00:12:46.146 "send_buf_size": 2097152, 00:12:46.146 "enable_recv_pipe": true, 00:12:46.146 "enable_quickack": false, 00:12:46.146 "enable_placement_id": 0, 00:12:46.146 "enable_zerocopy_send_server": false, 00:12:46.146 "enable_zerocopy_send_client": false, 00:12:46.146 "zerocopy_threshold": 0, 00:12:46.146 "tls_version": 0, 00:12:46.146 "enable_ktls": false 00:12:46.146 } 00:12:46.146 } 00:12:46.146 ] 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "subsystem": "vmd", 00:12:46.146 "config": [] 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "subsystem": "accel", 00:12:46.146 "config": [ 00:12:46.146 { 00:12:46.146 "method": "accel_set_options", 00:12:46.146 "params": { 00:12:46.146 "small_cache_size": 128, 00:12:46.146 "large_cache_size": 16, 00:12:46.146 "task_count": 2048, 00:12:46.146 "sequence_count": 2048, 00:12:46.146 "buf_count": 2048 00:12:46.146 } 00:12:46.146 } 00:12:46.146 ] 00:12:46.146 }, 00:12:46.146 { 00:12:46.146 "subsystem": "bdev", 00:12:46.146 "config": [ 00:12:46.146 { 00:12:46.146 "method": "bdev_set_options", 00:12:46.147 "params": { 00:12:46.147 "bdev_io_pool_size": 65535, 00:12:46.147 "bdev_io_cache_size": 256, 00:12:46.147 "bdev_auto_examine": true, 00:12:46.147 "iobuf_small_cache_size": 128, 00:12:46.147 "iobuf_large_cache_size": 16 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "bdev_raid_set_options", 00:12:46.147 "params": { 00:12:46.147 "process_window_size_kb": 1024 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "bdev_iscsi_set_options", 00:12:46.147 "params": { 00:12:46.147 "timeout_sec": 30 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "bdev_nvme_set_options", 00:12:46.147 "params": { 00:12:46.147 "action_on_timeout": "none", 00:12:46.147 "timeout_us": 0, 00:12:46.147 "timeout_admin_us": 0, 00:12:46.147 "keep_alive_timeout_ms": 10000, 00:12:46.147 "arbitration_burst": 0, 00:12:46.147 "low_priority_weight": 0, 00:12:46.147 "medium_priority_weight": 0, 00:12:46.147 "high_priority_weight": 0, 00:12:46.147 "nvme_adminq_poll_period_us": 10000, 00:12:46.147 "nvme_ioq_poll_period_us": 0, 00:12:46.147 "io_queue_requests": 0, 00:12:46.147 "delay_cmd_submit": true, 00:12:46.147 
"transport_retry_count": 4, 00:12:46.147 "bdev_retry_count": 3, 00:12:46.147 "transport_ack_timeout": 0, 00:12:46.147 "ctrlr_loss_timeout_sec": 0, 00:12:46.147 "reconnect_delay_sec": 0, 00:12:46.147 "fast_io_fail_timeout_sec": 0, 00:12:46.147 "disable_auto_failback": false, 00:12:46.147 "generate_uuids": false, 00:12:46.147 "transport_tos": 0, 00:12:46.147 "nvme_error_stat": false, 00:12:46.147 "rdma_srq_size": 0, 00:12:46.147 "io_path_stat": false, 00:12:46.147 "allow_accel_sequence": false, 00:12:46.147 "rdma_max_cq_size": 0, 00:12:46.147 "rdma_cm_event_timeout_ms": 0, 00:12:46.147 "dhchap_digests": [ 00:12:46.147 "sha256", 00:12:46.147 "sha384", 00:12:46.147 "sha512" 00:12:46.147 ], 00:12:46.147 "dhchap_dhgroups": [ 00:12:46.147 "null", 00:12:46.147 "ffdhe2048", 00:12:46.147 "ffdhe3072", 00:12:46.147 "ffdhe4096", 00:12:46.147 "ffdhe6144", 00:12:46.147 "ffdhe8192" 00:12:46.147 ] 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "bdev_nvme_set_hotplug", 00:12:46.147 "params": { 00:12:46.147 "period_us": 100000, 00:12:46.147 "enable": false 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "bdev_malloc_create", 00:12:46.147 "params": { 00:12:46.147 "name": "malloc0", 00:12:46.147 "num_blocks": 8192, 00:12:46.147 "block_size": 4096, 00:12:46.147 "physical_block_size": 4096, 00:12:46.147 "uuid": "f5e0fbe7-33e1-4c37-ac38-71c7e2fea28f", 00:12:46.147 "optimal_io_boundary": 0 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "bdev_wait_for_examine" 00:12:46.147 } 00:12:46.147 ] 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "subsystem": "nbd", 00:12:46.147 "config": [] 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "subsystem": "scheduler", 00:12:46.147 "config": [ 00:12:46.147 { 00:12:46.147 "method": "framework_set_scheduler", 00:12:46.147 "params": { 00:12:46.147 "name": "static" 00:12:46.147 } 00:12:46.147 } 00:12:46.147 ] 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "subsystem": "nvmf", 00:12:46.147 "config": [ 00:12:46.147 { 00:12:46.147 "method": "nvmf_set_config", 00:12:46.147 "params": { 00:12:46.147 "discovery_filter": "match_any", 00:12:46.147 "admin_cmd_passthru": { 00:12:46.147 "identify_ctrlr": false 00:12:46.147 } 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_set_max_subsystems", 00:12:46.147 "params": { 00:12:46.147 "max_subsystems": 1024 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_set_crdt", 00:12:46.147 "params": { 00:12:46.147 "crdt1": 0, 00:12:46.147 "crdt2": 0, 00:12:46.147 "crdt3": 0 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_create_transport", 00:12:46.147 "params": { 00:12:46.147 "trtype": "TCP", 00:12:46.147 "max_queue_depth": 128, 00:12:46.147 "max_io_qpairs_per_ctrlr": 127, 00:12:46.147 "in_capsule_data_size": 4096, 00:12:46.147 "max_io_size": 131072, 00:12:46.147 "io_unit_size": 131072, 00:12:46.147 "max_aq_depth": 128, 00:12:46.147 "num_shared_buffers": 511, 00:12:46.147 "buf_cache_size": 4294967295, 00:12:46.147 "dif_insert_or_strip": false, 00:12:46.147 "zcopy": false, 00:12:46.147 "c2h_success": false, 00:12:46.147 "sock_priority": 0, 00:12:46.147 "abort_timeout_sec": 1, 00:12:46.147 "ack_timeout": 0, 00:12:46.147 "data_wr_pool_size": 0 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_create_subsystem", 00:12:46.147 "params": { 00:12:46.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.147 "allow_any_host": false, 00:12:46.147 "serial_number": "SPDK00000000000001", 00:12:46.147 "model_number": "SPDK 
bdev Controller", 00:12:46.147 "max_namespaces": 10, 00:12:46.147 "min_cntlid": 1, 00:12:46.147 "max_cntlid": 65519, 00:12:46.147 "ana_reporting": false 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_subsystem_add_host", 00:12:46.147 "params": { 00:12:46.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.147 "host": "nqn.2016-06.io.spdk:host1", 00:12:46.147 "psk": "/tmp/tmp.CnXej2IDDC" 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_subsystem_add_ns", 00:12:46.147 "params": { 00:12:46.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.147 "namespace": { 00:12:46.147 "nsid": 1, 00:12:46.147 "bdev_name": "malloc0", 00:12:46.147 "nguid": "F5E0FBE733E14C37AC3871C7E2FEA28F", 00:12:46.147 "uuid": "f5e0fbe7-33e1-4c37-ac38-71c7e2fea28f", 00:12:46.147 "no_auto_visible": false 00:12:46.147 } 00:12:46.147 } 00:12:46.147 }, 00:12:46.147 { 00:12:46.147 "method": "nvmf_subsystem_add_listener", 00:12:46.147 "params": { 00:12:46.147 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.147 "listen_address": { 00:12:46.147 "trtype": "TCP", 00:12:46.147 "adrfam": "IPv4", 00:12:46.147 "traddr": "10.0.0.2", 00:12:46.147 "trsvcid": "4420" 00:12:46.147 }, 00:12:46.147 "secure_channel": true 00:12:46.147 } 00:12:46.147 } 00:12:46.147 ] 00:12:46.147 } 00:12:46.147 ] 00:12:46.147 }' 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73142 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73142 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73142 ']' 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.147 20:49:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.147 [2024-07-15 20:49:07.921654] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:12:46.147 [2024-07-15 20:49:07.921723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.147 [2024-07-15 20:49:08.051373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.407 [2024-07-15 20:49:08.139456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:46.407 [2024-07-15 20:49:08.139508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.407 [2024-07-15 20:49:08.139518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.407 [2024-07-15 20:49:08.139526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.407 [2024-07-15 20:49:08.139532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.407 [2024-07-15 20:49:08.139628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.407 [2024-07-15 20:49:08.293708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:46.666 [2024-07-15 20:49:08.352276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.666 [2024-07-15 20:49:08.368184] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:46.666 [2024-07-15 20:49:08.384153] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:46.666 [2024-07-15 20:49:08.384320] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73169 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73169 /var/tmp/bdevperf.sock 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73169 ']' 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:46.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
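At this point the harness restarts both halves of the test from the JSON captured earlier: the target configuration saved over the default /var/tmp/spdk.sock is fed to a fresh nvmf_tgt as /dev/fd/62, and the bdevperf configuration saved over /var/tmp/bdevperf.sock is fed to a fresh bdevperf as /dev/fd/63. A condensed sketch of that flow in plain shell, with $tgtconf and $bdevperfconf standing in for the two save_config dumps above (the variable names and the process substitution are illustrative; the real run additionally wraps the target in ip netns exec nvmf_tgt_ns_spdk):

  # Capture the running configuration of both applications, as done above.
  tgtconf=$(scripts/rpc.py save_config)
  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

  # Relaunch both sides with the saved configuration replayed at startup;
  # process substitution is what produces the /dev/fd/62 and /dev/fd/63 paths seen in the log.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &

  # Once bdevperf is listening on its RPC socket, trigger the timed I/O run.
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Because both configurations are applied while the applications start, the secure-channel listener on the target and the TLSTEST controller in bdevperf come up without any further configuration RPCs before perform_tests.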
00:12:46.925 20:49:08 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:12:46.925 "subsystems": [ 00:12:46.925 { 00:12:46.925 "subsystem": "keyring", 00:12:46.925 "config": [] 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "subsystem": "iobuf", 00:12:46.925 "config": [ 00:12:46.925 { 00:12:46.925 "method": "iobuf_set_options", 00:12:46.925 "params": { 00:12:46.925 "small_pool_count": 8192, 00:12:46.925 "large_pool_count": 1024, 00:12:46.925 "small_bufsize": 8192, 00:12:46.925 "large_bufsize": 135168 00:12:46.925 } 00:12:46.925 } 00:12:46.925 ] 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "subsystem": "sock", 00:12:46.925 "config": [ 00:12:46.925 { 00:12:46.925 "method": "sock_set_default_impl", 00:12:46.925 "params": { 00:12:46.925 "impl_name": "uring" 00:12:46.925 } 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "method": "sock_impl_set_options", 00:12:46.925 "params": { 00:12:46.925 "impl_name": "ssl", 00:12:46.925 "recv_buf_size": 4096, 00:12:46.925 "send_buf_size": 4096, 00:12:46.925 "enable_recv_pipe": true, 00:12:46.925 "enable_quickack": false, 00:12:46.925 "enable_placement_id": 0, 00:12:46.925 "enable_zerocopy_send_server": true, 00:12:46.925 "enable_zerocopy_send_client": false, 00:12:46.925 "zerocopy_threshold": 0, 00:12:46.925 "tls_version": 0, 00:12:46.925 "enable_ktls": false 00:12:46.925 } 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "method": "sock_impl_set_options", 00:12:46.925 "params": { 00:12:46.925 "impl_name": "posix", 00:12:46.925 "recv_buf_size": 2097152, 00:12:46.925 "send_buf_size": 2097152, 00:12:46.925 "enable_recv_pipe": true, 00:12:46.925 "enable_quickack": false, 00:12:46.925 "enable_placement_id": 0, 00:12:46.925 "enable_zerocopy_send_server": true, 00:12:46.925 "enable_zerocopy_send_client": false, 00:12:46.925 "zerocopy_threshold": 0, 00:12:46.925 "tls_version": 0, 00:12:46.925 "enable_ktls": false 00:12:46.925 } 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "method": "sock_impl_set_options", 00:12:46.925 "params": { 00:12:46.925 "impl_name": "uring", 00:12:46.925 "recv_buf_size": 2097152, 00:12:46.925 "send_buf_size": 2097152, 00:12:46.925 "enable_recv_pipe": true, 00:12:46.925 "enable_quickack": false, 00:12:46.925 "enable_placement_id": 0, 00:12:46.925 "enable_zerocopy_send_server": false, 00:12:46.925 "enable_zerocopy_send_client": false, 00:12:46.925 "zerocopy_threshold": 0, 00:12:46.925 "tls_version": 0, 00:12:46.925 "enable_ktls": false 00:12:46.925 } 00:12:46.925 } 00:12:46.925 ] 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "subsystem": "vmd", 00:12:46.925 "config": [] 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "subsystem": "accel", 00:12:46.925 "config": [ 00:12:46.925 { 00:12:46.925 "method": "accel_set_options", 00:12:46.925 "params": { 00:12:46.925 "small_cache_size": 128, 00:12:46.925 "large_cache_size": 16, 00:12:46.925 "task_count": 2048, 00:12:46.925 "sequence_count": 2048, 00:12:46.925 "buf_count": 2048 00:12:46.925 } 00:12:46.925 } 00:12:46.925 ] 00:12:46.925 }, 00:12:46.925 { 00:12:46.925 "subsystem": "bdev", 00:12:46.925 "config": [ 00:12:46.925 { 00:12:46.925 "method": "bdev_set_options", 00:12:46.925 "params": { 00:12:46.926 "bdev_io_pool_size": 65535, 00:12:46.926 "bdev_io_cache_size": 256, 00:12:46.926 "bdev_auto_examine": true, 00:12:46.926 "iobuf_small_cache_size": 128, 00:12:46.926 "iobuf_large_cache_size": 16 00:12:46.926 } 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 "method": "bdev_raid_set_options", 00:12:46.926 "params": { 00:12:46.926 "process_window_size_kb": 1024 00:12:46.926 } 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 
"method": "bdev_iscsi_set_options", 00:12:46.926 "params": { 00:12:46.926 "timeout_sec": 30 00:12:46.926 } 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 "method": "bdev_nvme_set_options", 00:12:46.926 "params": { 00:12:46.926 "action_on_timeout": "none", 00:12:46.926 "timeout_us": 0, 00:12:46.926 "timeout_admin_us": 0, 00:12:46.926 "keep_alive_timeout_ms": 10000, 00:12:46.926 "arbitration_burst": 0, 00:12:46.926 "low_priority_weight": 0, 00:12:46.926 "medium_priority_weight": 0, 00:12:46.926 "high_priority_weight": 0, 00:12:46.926 "nvme_adminq_poll_period_us": 10000, 00:12:46.926 "nvme_ioq_poll_period_us": 0, 00:12:46.926 "io_queue_requests": 512, 00:12:46.926 "delay_cmd_submit": true, 00:12:46.926 "transport_retry_count": 4, 00:12:46.926 "bdev_retry_count": 3, 00:12:46.926 "transport_ack_timeout": 0, 00:12:46.926 "ctrlr_loss_timeout_sec": 0, 00:12:46.926 "reconnect_delay_sec": 0, 00:12:46.926 "fast_io_fail_timeout_sec": 0, 00:12:46.926 "disable_auto_failback": false, 00:12:46.926 "generate_uuids": false, 00:12:46.926 "transport_tos": 0, 00:12:46.926 "nvme_error_stat": false, 00:12:46.926 "rdma_srq_size": 0, 00:12:46.926 "io_path_stat": false, 00:12:46.926 "allow_accel_sequence": false, 00:12:46.926 "rdma_max_cq_size": 0, 00:12:46.926 "rdma_cm_event_timeout_ms": 0, 00:12:46.926 "dhchap_digests": [ 00:12:46.926 "sha256", 00:12:46.926 "sha384", 00:12:46.926 "sha512" 00:12:46.926 ], 00:12:46.926 "dhchap_dhgroups": [ 00:12:46.926 "null", 00:12:46.926 "ffdhe2048", 00:12:46.926 "ffdhe3072", 00:12:46.926 "ffdhe4096", 00:12:46.926 "ffdhe6144", 00:12:46.926 "ffdhe8192" 00:12:46.926 ] 00:12:46.926 } 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 "method": "bdev_nvme_attach_controller", 00:12:46.926 "params": { 00:12:46.926 "name": "TLSTEST", 00:12:46.926 "trtype": "TCP", 00:12:46.926 "adrfam": "IPv4", 00:12:46.926 "traddr": "10.0.0.2", 00:12:46.926 "trsvcid": "4420", 00:12:46.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.926 "prchk_reftag": false, 00:12:46.926 "prchk_guard": false, 00:12:46.926 "ctrlr_loss_timeout_sec": 0, 00:12:46.926 "reconnect_delay_sec": 0, 00:12:46.926 "fast_io_fail_timeout_sec": 0, 00:12:46.926 "psk": "/tmp/tmp.CnXej2IDDC", 00:12:46.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:46.926 "hdgst": false, 00:12:46.926 "ddgst": false 00:12:46.926 } 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 "method": "bdev_nvme_set_hotplug", 00:12:46.926 "params": { 00:12:46.926 "period_us": 100000, 00:12:46.926 "enable": false 00:12:46.926 } 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 "method": "bdev_wait_for_examine" 00:12:46.926 } 00:12:46.926 ] 00:12:46.926 }, 00:12:46.926 { 00:12:46.926 "subsystem": "nbd", 00:12:46.926 "config": [] 00:12:46.926 } 00:12:46.926 ] 00:12:46.926 }' 00:12:46.926 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.926 20:49:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:47.184 [2024-07-15 20:49:08.842442] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:12:47.184 [2024-07-15 20:49:08.842628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73169 ] 00:12:47.184 [2024-07-15 20:49:08.969790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.184 [2024-07-15 20:49:09.057374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.443 [2024-07-15 20:49:09.179849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.443 [2024-07-15 20:49:09.210700] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:47.443 [2024-07-15 20:49:09.210989] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:48.010 20:49:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.010 20:49:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:48.010 20:49:09 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:48.010 Running I/O for 10 seconds... 00:12:57.990 00:12:57.990 Latency(us) 00:12:57.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.990 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:57.990 Verification LBA range: start 0x0 length 0x2000 00:12:57.990 TLSTESTn1 : 10.01 5831.24 22.78 0.00 0.00 21916.78 4263.79 17581.55 00:12:57.990 =================================================================================================================== 00:12:57.990 Total : 5831.24 22.78 0.00 0.00 21916.78 4263.79 17581.55 00:12:57.990 0 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73169 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73169 ']' 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73169 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73169 00:12:57.990 killing process with pid 73169 00:12:57.990 Received shutdown signal, test time was about 10.000000 seconds 00:12:57.990 00:12:57.990 Latency(us) 00:12:57.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.990 =================================================================================================================== 00:12:57.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73169' 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73169 00:12:57.990 [2024-07-15 20:49:19.841408] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:57.990 20:49:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73169 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73142 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73142 ']' 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73142 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73142 00:12:58.258 killing process with pid 73142 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73142' 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73142 00:12:58.258 [2024-07-15 20:49:20.066584] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:58.258 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73142 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73308 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73308 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73308 ']' 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.522 20:49:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.522 [2024-07-15 20:49:20.318808] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:12:58.522 [2024-07-15 20:49:20.318870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.781 [2024-07-15 20:49:20.459434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.781 [2024-07-15 20:49:20.549461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:58.781 [2024-07-15 20:49:20.549510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.781 [2024-07-15 20:49:20.549519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.781 [2024-07-15 20:49:20.549527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.781 [2024-07-15 20:49:20.549535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.781 [2024-07-15 20:49:20.549562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.781 [2024-07-15 20:49:20.589981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.CnXej2IDDC 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.CnXej2IDDC 00:12:59.349 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:59.608 [2024-07-15 20:49:21.370777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.608 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:59.866 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:59.866 [2024-07-15 20:49:21.746232] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:59.866 [2024-07-15 20:49:21.746421] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.866 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:00.125 malloc0 00:13:00.125 20:49:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.383 20:49:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CnXej2IDDC 00:13:00.641 [2024-07-15 20:49:22.298139] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73355 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
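The setup_nvmf_tgt helper invoked at target/tls.sh@219 above reduces to a short sequence of rpc.py calls against the freshly started target: create the TCP transport, create a subsystem backed by a malloc bdev, open a listener marked as a secure channel, and register the allowed host NQN together with its pre-shared key. Condensed from the calls traced above, using the same NQNs, address and key path as in the log (the --psk file-path form is the deprecated one the target warns about):

  # Target-side TLS setup, in the order the harness issues it.
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a TLS listener; it shows up as "secure_channel": true in the saved config.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # 32 MiB malloc bdev with 4096-byte blocks (8192 blocks, matching the saved config above).
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.CnXej2IDDC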
00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73355 /var/tmp/bdevperf.sock 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73355 ']' 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.641 20:49:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.641 [2024-07-15 20:49:22.363410] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:00.641 [2024-07-15 20:49:22.363473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73355 ] 00:13:00.641 [2024-07-15 20:49:22.500740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.899 [2024-07-15 20:49:22.579115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.899 [2024-07-15 20:49:22.620340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:01.474 20:49:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.474 20:49:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:01.474 20:49:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CnXej2IDDC 00:13:01.734 20:49:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:01.735 [2024-07-15 20:49:23.560617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:01.735 nvme0n1 00:13:01.992 20:49:23 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:01.992 Running I/O for 1 seconds... 
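On the initiator side this stage switches from the deprecated PSK file path to the keyring API: the key file is first registered with bdevperf under the name key0, and the controller is then attached by key name. Condensed from the rpc.py calls traced above (bdevperf serves RPC on /var/tmp/bdevperf.sock):

  # Register the pre-shared key file under the name "key0" in bdevperf's keyring.
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CnXej2IDDC
  # Attach the TLS-protected controller by key name instead of by file path.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Run the 1-second verify workload configured on the bdevperf command line.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the result table that follows, MiB/s is simply IOPS scaled by the 4 KiB I/O size: 5877.35 * 4096 / 1048576 comes out to about 22.96 MiB/s.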
00:13:02.924 00:13:02.924 Latency(us) 00:13:02.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:02.924 Verification LBA range: start 0x0 length 0x2000 00:13:02.924 nvme0n1 : 1.01 5877.35 22.96 0.00 0.00 21623.62 4526.98 16528.76 00:13:02.924 =================================================================================================================== 00:13:02.924 Total : 5877.35 22.96 0.00 0.00 21623.62 4526.98 16528.76 00:13:02.924 0 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73355 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73355 ']' 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73355 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73355 00:13:02.924 killing process with pid 73355 00:13:02.924 Received shutdown signal, test time was about 1.000000 seconds 00:13:02.924 00:13:02.924 Latency(us) 00:13:02.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.924 =================================================================================================================== 00:13:02.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73355' 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73355 00:13:02.924 20:49:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73355 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73308 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73308 ']' 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73308 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73308 00:13:03.181 killing process with pid 73308 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73308' 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73308 00:13:03.181 [2024-07-15 20:49:25.040773] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:03.181 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73308 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73402 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73402 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73402 ']' 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.438 20:49:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.438 [2024-07-15 20:49:25.288303] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:03.438 [2024-07-15 20:49:25.288365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.696 [2024-07-15 20:49:25.426356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.696 [2024-07-15 20:49:25.502292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.696 [2024-07-15 20:49:25.502343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.696 [2024-07-15 20:49:25.502353] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.696 [2024-07-15 20:49:25.502360] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.696 [2024-07-15 20:49:25.502367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
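The killprocess helper that tears down each of these daemons is deliberately careful: it refuses to act on an empty PID, checks that the process is still alive, looks up its command name (a reactor_N thread name for SPDK apps), and only then kills it and waits for it to exit so the next stage starts clean. A rough reconstruction from the checks traced above; this is an approximation for illustration, not the autotest_common.sh source:

  # Approximate shape of killprocess, pieced together from the trace above.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                # the '[ -z ... ]' guard in the trace
      kill -0 "$pid" || return 1               # bail out if the process is already gone
      if [ "$(uname)" = Linux ]; then
          # the trace records the comm name (reactor_0/1/2) before deciding how to kill
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      kill "$pid"                              # a sudo-wrapped process would need different handling
      wait "$pid"
  }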
00:13:03.696 [2024-07-15 20:49:25.502391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.696 [2024-07-15 20:49:25.542895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.287 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.287 [2024-07-15 20:49:26.179783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.545 malloc0 00:13:04.545 [2024-07-15 20:49:26.208333] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:04.545 [2024-07-15 20:49:26.208730] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73434 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73434 /var/tmp/bdevperf.sock 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73434 ']' 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.545 20:49:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.545 [2024-07-15 20:49:26.287589] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:13:04.545 [2024-07-15 20:49:26.287650] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73434 ] 00:13:04.545 [2024-07-15 20:49:26.422867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.802 [2024-07-15 20:49:26.510157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.802 [2024-07-15 20:49:26.551393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:05.385 20:49:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.385 20:49:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:05.385 20:49:27 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CnXej2IDDC 00:13:05.688 20:49:27 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:05.688 [2024-07-15 20:49:27.439718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:05.688 nvme0n1 00:13:05.688 20:49:27 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:05.946 Running I/O for 1 seconds... 00:13:06.881 00:13:06.881 Latency(us) 00:13:06.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.881 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:06.881 Verification LBA range: start 0x0 length 0x2000 00:13:06.881 nvme0n1 : 1.01 5922.15 23.13 0.00 0.00 21458.88 4369.07 16844.59 00:13:06.881 =================================================================================================================== 00:13:06.881 Total : 5922.15 23.13 0.00 0.00 21458.88 4369.07 16844.59 00:13:06.881 0 00:13:06.881 20:49:28 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:13:06.881 20:49:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.881 20:49:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.881 20:49:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.881 20:49:28 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:13:06.881 "subsystems": [ 00:13:06.881 { 00:13:06.881 "subsystem": "keyring", 00:13:06.881 "config": [ 00:13:06.881 { 00:13:06.881 "method": "keyring_file_add_key", 00:13:06.881 "params": { 00:13:06.881 "name": "key0", 00:13:06.881 "path": "/tmp/tmp.CnXej2IDDC" 00:13:06.881 } 00:13:06.881 } 00:13:06.881 ] 00:13:06.881 }, 00:13:06.881 { 00:13:06.881 "subsystem": "iobuf", 00:13:06.881 "config": [ 00:13:06.881 { 00:13:06.881 "method": "iobuf_set_options", 00:13:06.881 "params": { 00:13:06.881 "small_pool_count": 8192, 00:13:06.881 "large_pool_count": 1024, 00:13:06.881 "small_bufsize": 8192, 00:13:06.881 "large_bufsize": 135168 00:13:06.881 } 00:13:06.881 } 00:13:06.881 ] 00:13:06.881 }, 00:13:06.881 { 00:13:06.881 "subsystem": "sock", 00:13:06.881 "config": [ 00:13:06.881 { 00:13:06.881 "method": "sock_set_default_impl", 00:13:06.881 "params": { 00:13:06.881 "impl_name": "uring" 
00:13:06.881 } 00:13:06.881 }, 00:13:06.881 { 00:13:06.881 "method": "sock_impl_set_options", 00:13:06.881 "params": { 00:13:06.881 "impl_name": "ssl", 00:13:06.881 "recv_buf_size": 4096, 00:13:06.881 "send_buf_size": 4096, 00:13:06.881 "enable_recv_pipe": true, 00:13:06.881 "enable_quickack": false, 00:13:06.881 "enable_placement_id": 0, 00:13:06.881 "enable_zerocopy_send_server": true, 00:13:06.881 "enable_zerocopy_send_client": false, 00:13:06.881 "zerocopy_threshold": 0, 00:13:06.881 "tls_version": 0, 00:13:06.881 "enable_ktls": false 00:13:06.881 } 00:13:06.881 }, 00:13:06.881 { 00:13:06.881 "method": "sock_impl_set_options", 00:13:06.881 "params": { 00:13:06.881 "impl_name": "posix", 00:13:06.881 "recv_buf_size": 2097152, 00:13:06.881 "send_buf_size": 2097152, 00:13:06.881 "enable_recv_pipe": true, 00:13:06.881 "enable_quickack": false, 00:13:06.881 "enable_placement_id": 0, 00:13:06.881 "enable_zerocopy_send_server": true, 00:13:06.881 "enable_zerocopy_send_client": false, 00:13:06.881 "zerocopy_threshold": 0, 00:13:06.881 "tls_version": 0, 00:13:06.881 "enable_ktls": false 00:13:06.881 } 00:13:06.881 }, 00:13:06.881 { 00:13:06.881 "method": "sock_impl_set_options", 00:13:06.881 "params": { 00:13:06.881 "impl_name": "uring", 00:13:06.881 "recv_buf_size": 2097152, 00:13:06.881 "send_buf_size": 2097152, 00:13:06.881 "enable_recv_pipe": true, 00:13:06.881 "enable_quickack": false, 00:13:06.881 "enable_placement_id": 0, 00:13:06.882 "enable_zerocopy_send_server": false, 00:13:06.882 "enable_zerocopy_send_client": false, 00:13:06.882 "zerocopy_threshold": 0, 00:13:06.882 "tls_version": 0, 00:13:06.882 "enable_ktls": false 00:13:06.882 } 00:13:06.882 } 00:13:06.882 ] 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "subsystem": "vmd", 00:13:06.882 "config": [] 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "subsystem": "accel", 00:13:06.882 "config": [ 00:13:06.882 { 00:13:06.882 "method": "accel_set_options", 00:13:06.882 "params": { 00:13:06.882 "small_cache_size": 128, 00:13:06.882 "large_cache_size": 16, 00:13:06.882 "task_count": 2048, 00:13:06.882 "sequence_count": 2048, 00:13:06.882 "buf_count": 2048 00:13:06.882 } 00:13:06.882 } 00:13:06.882 ] 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "subsystem": "bdev", 00:13:06.882 "config": [ 00:13:06.882 { 00:13:06.882 "method": "bdev_set_options", 00:13:06.882 "params": { 00:13:06.882 "bdev_io_pool_size": 65535, 00:13:06.882 "bdev_io_cache_size": 256, 00:13:06.882 "bdev_auto_examine": true, 00:13:06.882 "iobuf_small_cache_size": 128, 00:13:06.882 "iobuf_large_cache_size": 16 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "bdev_raid_set_options", 00:13:06.882 "params": { 00:13:06.882 "process_window_size_kb": 1024 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "bdev_iscsi_set_options", 00:13:06.882 "params": { 00:13:06.882 "timeout_sec": 30 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "bdev_nvme_set_options", 00:13:06.882 "params": { 00:13:06.882 "action_on_timeout": "none", 00:13:06.882 "timeout_us": 0, 00:13:06.882 "timeout_admin_us": 0, 00:13:06.882 "keep_alive_timeout_ms": 10000, 00:13:06.882 "arbitration_burst": 0, 00:13:06.882 "low_priority_weight": 0, 00:13:06.882 "medium_priority_weight": 0, 00:13:06.882 "high_priority_weight": 0, 00:13:06.882 "nvme_adminq_poll_period_us": 10000, 00:13:06.882 "nvme_ioq_poll_period_us": 0, 00:13:06.882 "io_queue_requests": 0, 00:13:06.882 "delay_cmd_submit": true, 00:13:06.882 "transport_retry_count": 4, 00:13:06.882 "bdev_retry_count": 3, 
00:13:06.882 "transport_ack_timeout": 0, 00:13:06.882 "ctrlr_loss_timeout_sec": 0, 00:13:06.882 "reconnect_delay_sec": 0, 00:13:06.882 "fast_io_fail_timeout_sec": 0, 00:13:06.882 "disable_auto_failback": false, 00:13:06.882 "generate_uuids": false, 00:13:06.882 "transport_tos": 0, 00:13:06.882 "nvme_error_stat": false, 00:13:06.882 "rdma_srq_size": 0, 00:13:06.882 "io_path_stat": false, 00:13:06.882 "allow_accel_sequence": false, 00:13:06.882 "rdma_max_cq_size": 0, 00:13:06.882 "rdma_cm_event_timeout_ms": 0, 00:13:06.882 "dhchap_digests": [ 00:13:06.882 "sha256", 00:13:06.882 "sha384", 00:13:06.882 "sha512" 00:13:06.882 ], 00:13:06.882 "dhchap_dhgroups": [ 00:13:06.882 "null", 00:13:06.882 "ffdhe2048", 00:13:06.882 "ffdhe3072", 00:13:06.882 "ffdhe4096", 00:13:06.882 "ffdhe6144", 00:13:06.882 "ffdhe8192" 00:13:06.882 ] 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "bdev_nvme_set_hotplug", 00:13:06.882 "params": { 00:13:06.882 "period_us": 100000, 00:13:06.882 "enable": false 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "bdev_malloc_create", 00:13:06.882 "params": { 00:13:06.882 "name": "malloc0", 00:13:06.882 "num_blocks": 8192, 00:13:06.882 "block_size": 4096, 00:13:06.882 "physical_block_size": 4096, 00:13:06.882 "uuid": "cad4b86e-ba6f-4715-b46f-6faadcb23229", 00:13:06.882 "optimal_io_boundary": 0 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "bdev_wait_for_examine" 00:13:06.882 } 00:13:06.882 ] 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "subsystem": "nbd", 00:13:06.882 "config": [] 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "subsystem": "scheduler", 00:13:06.882 "config": [ 00:13:06.882 { 00:13:06.882 "method": "framework_set_scheduler", 00:13:06.882 "params": { 00:13:06.882 "name": "static" 00:13:06.882 } 00:13:06.882 } 00:13:06.882 ] 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "subsystem": "nvmf", 00:13:06.882 "config": [ 00:13:06.882 { 00:13:06.882 "method": "nvmf_set_config", 00:13:06.882 "params": { 00:13:06.882 "discovery_filter": "match_any", 00:13:06.882 "admin_cmd_passthru": { 00:13:06.882 "identify_ctrlr": false 00:13:06.882 } 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "nvmf_set_max_subsystems", 00:13:06.882 "params": { 00:13:06.882 "max_subsystems": 1024 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "nvmf_set_crdt", 00:13:06.882 "params": { 00:13:06.882 "crdt1": 0, 00:13:06.882 "crdt2": 0, 00:13:06.882 "crdt3": 0 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "nvmf_create_transport", 00:13:06.882 "params": { 00:13:06.882 "trtype": "TCP", 00:13:06.882 "max_queue_depth": 128, 00:13:06.882 "max_io_qpairs_per_ctrlr": 127, 00:13:06.882 "in_capsule_data_size": 4096, 00:13:06.882 "max_io_size": 131072, 00:13:06.882 "io_unit_size": 131072, 00:13:06.882 "max_aq_depth": 128, 00:13:06.882 "num_shared_buffers": 511, 00:13:06.882 "buf_cache_size": 4294967295, 00:13:06.882 "dif_insert_or_strip": false, 00:13:06.882 "zcopy": false, 00:13:06.882 "c2h_success": false, 00:13:06.882 "sock_priority": 0, 00:13:06.882 "abort_timeout_sec": 1, 00:13:06.882 "ack_timeout": 0, 00:13:06.882 "data_wr_pool_size": 0 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "nvmf_create_subsystem", 00:13:06.882 "params": { 00:13:06.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.882 "allow_any_host": false, 00:13:06.882 "serial_number": "00000000000000000000", 00:13:06.882 "model_number": "SPDK bdev Controller", 00:13:06.882 "max_namespaces": 32, 
00:13:06.882 "min_cntlid": 1, 00:13:06.882 "max_cntlid": 65519, 00:13:06.882 "ana_reporting": false 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.882 "method": "nvmf_subsystem_add_host", 00:13:06.882 "params": { 00:13:06.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.882 "host": "nqn.2016-06.io.spdk:host1", 00:13:06.882 "psk": "key0" 00:13:06.882 } 00:13:06.882 }, 00:13:06.882 { 00:13:06.883 "method": "nvmf_subsystem_add_ns", 00:13:06.883 "params": { 00:13:06.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.883 "namespace": { 00:13:06.883 "nsid": 1, 00:13:06.883 "bdev_name": "malloc0", 00:13:06.883 "nguid": "CAD4B86EBA6F4715B46F6FAADCB23229", 00:13:06.883 "uuid": "cad4b86e-ba6f-4715-b46f-6faadcb23229", 00:13:06.883 "no_auto_visible": false 00:13:06.883 } 00:13:06.883 } 00:13:06.883 }, 00:13:06.883 { 00:13:06.883 "method": "nvmf_subsystem_add_listener", 00:13:06.883 "params": { 00:13:06.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.883 "listen_address": { 00:13:06.883 "trtype": "TCP", 00:13:06.883 "adrfam": "IPv4", 00:13:06.883 "traddr": "10.0.0.2", 00:13:06.883 "trsvcid": "4420" 00:13:06.883 }, 00:13:06.883 "secure_channel": false, 00:13:06.883 "sock_impl": "ssl" 00:13:06.883 } 00:13:06.883 } 00:13:06.883 ] 00:13:06.883 } 00:13:06.883 ] 00:13:06.883 }' 00:13:06.883 20:49:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:07.141 20:49:29 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:13:07.141 "subsystems": [ 00:13:07.141 { 00:13:07.141 "subsystem": "keyring", 00:13:07.141 "config": [ 00:13:07.141 { 00:13:07.141 "method": "keyring_file_add_key", 00:13:07.141 "params": { 00:13:07.141 "name": "key0", 00:13:07.141 "path": "/tmp/tmp.CnXej2IDDC" 00:13:07.141 } 00:13:07.141 } 00:13:07.141 ] 00:13:07.141 }, 00:13:07.141 { 00:13:07.141 "subsystem": "iobuf", 00:13:07.141 "config": [ 00:13:07.141 { 00:13:07.141 "method": "iobuf_set_options", 00:13:07.141 "params": { 00:13:07.141 "small_pool_count": 8192, 00:13:07.141 "large_pool_count": 1024, 00:13:07.141 "small_bufsize": 8192, 00:13:07.141 "large_bufsize": 135168 00:13:07.141 } 00:13:07.141 } 00:13:07.141 ] 00:13:07.141 }, 00:13:07.141 { 00:13:07.141 "subsystem": "sock", 00:13:07.141 "config": [ 00:13:07.141 { 00:13:07.141 "method": "sock_set_default_impl", 00:13:07.141 "params": { 00:13:07.141 "impl_name": "uring" 00:13:07.141 } 00:13:07.141 }, 00:13:07.141 { 00:13:07.141 "method": "sock_impl_set_options", 00:13:07.141 "params": { 00:13:07.141 "impl_name": "ssl", 00:13:07.141 "recv_buf_size": 4096, 00:13:07.141 "send_buf_size": 4096, 00:13:07.141 "enable_recv_pipe": true, 00:13:07.142 "enable_quickack": false, 00:13:07.142 "enable_placement_id": 0, 00:13:07.142 "enable_zerocopy_send_server": true, 00:13:07.142 "enable_zerocopy_send_client": false, 00:13:07.142 "zerocopy_threshold": 0, 00:13:07.142 "tls_version": 0, 00:13:07.142 "enable_ktls": false 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "method": "sock_impl_set_options", 00:13:07.142 "params": { 00:13:07.142 "impl_name": "posix", 00:13:07.142 "recv_buf_size": 2097152, 00:13:07.142 "send_buf_size": 2097152, 00:13:07.142 "enable_recv_pipe": true, 00:13:07.142 "enable_quickack": false, 00:13:07.142 "enable_placement_id": 0, 00:13:07.142 "enable_zerocopy_send_server": true, 00:13:07.142 "enable_zerocopy_send_client": false, 00:13:07.142 "zerocopy_threshold": 0, 00:13:07.142 "tls_version": 0, 00:13:07.142 "enable_ktls": false 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 
00:13:07.142 "method": "sock_impl_set_options", 00:13:07.142 "params": { 00:13:07.142 "impl_name": "uring", 00:13:07.142 "recv_buf_size": 2097152, 00:13:07.142 "send_buf_size": 2097152, 00:13:07.142 "enable_recv_pipe": true, 00:13:07.142 "enable_quickack": false, 00:13:07.142 "enable_placement_id": 0, 00:13:07.142 "enable_zerocopy_send_server": false, 00:13:07.142 "enable_zerocopy_send_client": false, 00:13:07.142 "zerocopy_threshold": 0, 00:13:07.142 "tls_version": 0, 00:13:07.142 "enable_ktls": false 00:13:07.142 } 00:13:07.142 } 00:13:07.142 ] 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "subsystem": "vmd", 00:13:07.142 "config": [] 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "subsystem": "accel", 00:13:07.142 "config": [ 00:13:07.142 { 00:13:07.142 "method": "accel_set_options", 00:13:07.142 "params": { 00:13:07.142 "small_cache_size": 128, 00:13:07.142 "large_cache_size": 16, 00:13:07.142 "task_count": 2048, 00:13:07.142 "sequence_count": 2048, 00:13:07.142 "buf_count": 2048 00:13:07.142 } 00:13:07.142 } 00:13:07.142 ] 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "subsystem": "bdev", 00:13:07.142 "config": [ 00:13:07.142 { 00:13:07.142 "method": "bdev_set_options", 00:13:07.142 "params": { 00:13:07.142 "bdev_io_pool_size": 65535, 00:13:07.142 "bdev_io_cache_size": 256, 00:13:07.142 "bdev_auto_examine": true, 00:13:07.142 "iobuf_small_cache_size": 128, 00:13:07.142 "iobuf_large_cache_size": 16 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "method": "bdev_raid_set_options", 00:13:07.142 "params": { 00:13:07.142 "process_window_size_kb": 1024 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "method": "bdev_iscsi_set_options", 00:13:07.142 "params": { 00:13:07.142 "timeout_sec": 30 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "method": "bdev_nvme_set_options", 00:13:07.142 "params": { 00:13:07.142 "action_on_timeout": "none", 00:13:07.142 "timeout_us": 0, 00:13:07.142 "timeout_admin_us": 0, 00:13:07.142 "keep_alive_timeout_ms": 10000, 00:13:07.142 "arbitration_burst": 0, 00:13:07.142 "low_priority_weight": 0, 00:13:07.142 "medium_priority_weight": 0, 00:13:07.142 "high_priority_weight": 0, 00:13:07.142 "nvme_adminq_poll_period_us": 10000, 00:13:07.142 "nvme_ioq_poll_period_us": 0, 00:13:07.142 "io_queue_requests": 512, 00:13:07.142 "delay_cmd_submit": true, 00:13:07.142 "transport_retry_count": 4, 00:13:07.142 "bdev_retry_count": 3, 00:13:07.142 "transport_ack_timeout": 0, 00:13:07.142 "ctrlr_loss_timeout_sec": 0, 00:13:07.142 "reconnect_delay_sec": 0, 00:13:07.142 "fast_io_fail_timeout_sec": 0, 00:13:07.142 "disable_auto_failback": false, 00:13:07.142 "generate_uuids": false, 00:13:07.142 "transport_tos": 0, 00:13:07.142 "nvme_error_stat": false, 00:13:07.142 "rdma_srq_size": 0, 00:13:07.142 "io_path_stat": false, 00:13:07.142 "allow_accel_sequence": false, 00:13:07.142 "rdma_max_cq_size": 0, 00:13:07.142 "rdma_cm_event_timeout_ms": 0, 00:13:07.142 "dhchap_digests": [ 00:13:07.142 "sha256", 00:13:07.142 "sha384", 00:13:07.142 "sha512" 00:13:07.142 ], 00:13:07.142 "dhchap_dhgroups": [ 00:13:07.142 "null", 00:13:07.142 "ffdhe2048", 00:13:07.142 "ffdhe3072", 00:13:07.142 "ffdhe4096", 00:13:07.142 "ffdhe6144", 00:13:07.142 "ffdhe8192" 00:13:07.142 ] 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "method": "bdev_nvme_attach_controller", 00:13:07.142 "params": { 00:13:07.142 "name": "nvme0", 00:13:07.142 "trtype": "TCP", 00:13:07.142 "adrfam": "IPv4", 00:13:07.142 "traddr": "10.0.0.2", 00:13:07.142 "trsvcid": "4420", 00:13:07.142 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:13:07.142 "prchk_reftag": false, 00:13:07.142 "prchk_guard": false, 00:13:07.142 "ctrlr_loss_timeout_sec": 0, 00:13:07.142 "reconnect_delay_sec": 0, 00:13:07.142 "fast_io_fail_timeout_sec": 0, 00:13:07.142 "psk": "key0", 00:13:07.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:07.142 "hdgst": false, 00:13:07.142 "ddgst": false 00:13:07.142 } 00:13:07.142 }, 00:13:07.142 { 00:13:07.142 "method": "bdev_nvme_set_hotplug", 00:13:07.142 "params": { 00:13:07.142 "period_us": 100000, 00:13:07.142 "enable": false 00:13:07.142 } 00:13:07.143 }, 00:13:07.143 { 00:13:07.143 "method": "bdev_enable_histogram", 00:13:07.143 "params": { 00:13:07.143 "name": "nvme0n1", 00:13:07.143 "enable": true 00:13:07.143 } 00:13:07.143 }, 00:13:07.143 { 00:13:07.143 "method": "bdev_wait_for_examine" 00:13:07.143 } 00:13:07.143 ] 00:13:07.143 }, 00:13:07.143 { 00:13:07.143 "subsystem": "nbd", 00:13:07.143 "config": [] 00:13:07.143 } 00:13:07.143 ] 00:13:07.143 }' 00:13:07.143 20:49:29 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 73434 00:13:07.143 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73434 ']' 00:13:07.143 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73434 00:13:07.143 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:07.143 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.143 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73434 00:13:07.400 killing process with pid 73434 00:13:07.400 Received shutdown signal, test time was about 1.000000 seconds 00:13:07.400 00:13:07.400 Latency(us) 00:13:07.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.400 =================================================================================================================== 00:13:07.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73434' 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73434 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73434 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 73402 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73402 ']' 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73402 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73402 00:13:07.401 killing process with pid 73402 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73402' 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73402 00:13:07.401 20:49:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 73402 00:13:07.660 20:49:29 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:13:07.660 "subsystems": [ 00:13:07.660 { 00:13:07.660 "subsystem": "keyring", 00:13:07.660 "config": [ 00:13:07.660 { 00:13:07.660 "method": "keyring_file_add_key", 00:13:07.660 "params": { 00:13:07.660 "name": "key0", 00:13:07.660 "path": "/tmp/tmp.CnXej2IDDC" 00:13:07.660 } 00:13:07.660 } 00:13:07.660 ] 00:13:07.660 }, 00:13:07.660 { 00:13:07.660 "subsystem": "iobuf", 00:13:07.660 "config": [ 00:13:07.660 { 00:13:07.660 "method": "iobuf_set_options", 00:13:07.660 "params": { 00:13:07.660 "small_pool_count": 8192, 00:13:07.660 "large_pool_count": 1024, 00:13:07.660 "small_bufsize": 8192, 00:13:07.660 "large_bufsize": 135168 00:13:07.660 } 00:13:07.660 } 00:13:07.660 ] 00:13:07.660 }, 00:13:07.660 { 00:13:07.660 "subsystem": "sock", 00:13:07.660 "config": [ 00:13:07.660 { 00:13:07.660 "method": "sock_set_default_impl", 00:13:07.660 "params": { 00:13:07.660 "impl_name": "uring" 00:13:07.660 } 00:13:07.660 }, 00:13:07.660 { 00:13:07.660 "method": "sock_impl_set_options", 00:13:07.660 "params": { 00:13:07.660 "impl_name": "ssl", 00:13:07.660 "recv_buf_size": 4096, 00:13:07.660 "send_buf_size": 4096, 00:13:07.660 "enable_recv_pipe": true, 00:13:07.660 "enable_quickack": false, 00:13:07.660 "enable_placement_id": 0, 00:13:07.660 "enable_zerocopy_send_server": true, 00:13:07.660 "enable_zerocopy_send_client": false, 00:13:07.660 "zerocopy_threshold": 0, 00:13:07.660 "tls_version": 0, 00:13:07.660 "enable_ktls": false 00:13:07.660 } 00:13:07.660 }, 00:13:07.660 { 00:13:07.660 "method": "sock_impl_set_options", 00:13:07.660 "params": { 00:13:07.660 "impl_name": "posix", 00:13:07.660 "recv_buf_size": 2097152, 00:13:07.660 "send_buf_size": 2097152, 00:13:07.660 "enable_recv_pipe": true, 00:13:07.660 "enable_quickack": false, 00:13:07.660 "enable_placement_id": 0, 00:13:07.660 "enable_zerocopy_send_server": true, 00:13:07.660 "enable_zerocopy_send_client": false, 00:13:07.660 "zerocopy_threshold": 0, 00:13:07.660 "tls_version": 0, 00:13:07.660 "enable_ktls": false 00:13:07.660 } 00:13:07.660 }, 00:13:07.660 { 00:13:07.660 "method": "sock_impl_set_options", 00:13:07.660 "params": { 00:13:07.660 "impl_name": "uring", 00:13:07.660 "recv_buf_size": 2097152, 00:13:07.660 "send_buf_size": 2097152, 00:13:07.661 "enable_recv_pipe": true, 00:13:07.661 "enable_quickack": false, 00:13:07.661 "enable_placement_id": 0, 00:13:07.661 "enable_zerocopy_send_server": false, 00:13:07.661 "enable_zerocopy_send_client": false, 00:13:07.661 "zerocopy_threshold": 0, 00:13:07.661 "tls_version": 0, 00:13:07.661 "enable_ktls": false 00:13:07.661 } 00:13:07.661 } 00:13:07.661 ] 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "subsystem": "vmd", 00:13:07.661 "config": [] 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "subsystem": "accel", 00:13:07.661 "config": [ 00:13:07.661 { 00:13:07.661 "method": "accel_set_options", 00:13:07.661 "params": { 00:13:07.661 "small_cache_size": 128, 00:13:07.661 "large_cache_size": 16, 00:13:07.661 "task_count": 2048, 00:13:07.661 "sequence_count": 2048, 00:13:07.661 "buf_count": 2048 00:13:07.661 } 00:13:07.661 } 00:13:07.661 ] 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "subsystem": "bdev", 00:13:07.661 "config": [ 00:13:07.661 { 00:13:07.661 "method": "bdev_set_options", 00:13:07.661 "params": { 00:13:07.661 "bdev_io_pool_size": 65535, 00:13:07.661 "bdev_io_cache_size": 256, 00:13:07.661 "bdev_auto_examine": true, 00:13:07.661 "iobuf_small_cache_size": 128, 00:13:07.661 
"iobuf_large_cache_size": 16 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "bdev_raid_set_options", 00:13:07.661 "params": { 00:13:07.661 "process_window_size_kb": 1024 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "bdev_iscsi_set_options", 00:13:07.661 "params": { 00:13:07.661 "timeout_sec": 30 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "bdev_nvme_set_options", 00:13:07.661 "params": { 00:13:07.661 "action_on_timeout": "none", 00:13:07.661 "timeout_us": 0, 00:13:07.661 "timeout_admin_us": 0, 00:13:07.661 "keep_alive_timeout_ms": 10000, 00:13:07.661 "arbitration_burst": 0, 00:13:07.661 "low_priority_weight": 0, 00:13:07.661 "medium_priority_weight": 0, 00:13:07.661 "high_priority_weight": 0, 00:13:07.661 "nvme_adminq_poll_period_us": 10000, 00:13:07.661 "nvme_ioq_poll_period_us": 0, 00:13:07.661 "io_queue_requests": 0, 00:13:07.661 "delay_cmd_submit": true, 00:13:07.661 "transport_retry_count": 4, 00:13:07.661 "bdev_retry_count": 3, 00:13:07.661 "transport_ack_timeout": 0, 00:13:07.661 "ctrlr_loss_timeout_sec": 0, 00:13:07.661 "reconnect_delay_sec": 0, 00:13:07.661 "fast_io_fail_timeout_sec": 0, 00:13:07.661 "disable_auto_failback": false, 00:13:07.661 "generate_uuids": false, 00:13:07.661 "transport_tos": 0, 00:13:07.661 "nvme_error_stat": false, 00:13:07.661 "rdma_srq_size": 0, 00:13:07.661 "io_path_stat": false, 00:13:07.661 "allow_accel_sequence": false, 00:13:07.661 "rdma_max_cq_size": 0, 00:13:07.661 "rdma_cm_event_timeout_ms": 0, 00:13:07.661 "dhchap_digests": [ 00:13:07.661 "sha256", 00:13:07.661 "sha384", 00:13:07.661 "sha512" 00:13:07.661 ], 00:13:07.661 "dhchap_dhgroups": [ 00:13:07.661 "null", 00:13:07.661 "ffdhe2048", 00:13:07.661 "ffdhe3072", 00:13:07.661 "ffdhe4096", 00:13:07.661 "ffdhe6144", 00:13:07.661 "ffdhe8192" 00:13:07.661 ] 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "bdev_nvme_set_hotplug", 00:13:07.661 "params": { 00:13:07.661 "period_us": 100000, 00:13:07.661 "enable": false 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "bdev_malloc_create", 00:13:07.661 "params": { 00:13:07.661 "name": "malloc0", 00:13:07.661 "num_blocks": 8192, 00:13:07.661 "block_size": 4096, 00:13:07.661 "physical_block_size": 4096, 00:13:07.661 "uuid": "cad4b86e-ba6f-4715-b46f-6faadcb23229", 00:13:07.661 "optimal_io_boundary": 0 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "bdev_wait_for_examine" 00:13:07.661 } 00:13:07.661 ] 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "subsystem": "nbd", 00:13:07.661 "config": [] 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "subsystem": "scheduler", 00:13:07.661 "config": [ 00:13:07.661 { 00:13:07.661 "method": "framework_set_scheduler", 00:13:07.661 "params": { 00:13:07.661 "name": "static" 00:13:07.661 } 00:13:07.661 } 00:13:07.661 ] 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "subsystem": "nvmf", 00:13:07.661 "config": [ 00:13:07.661 { 00:13:07.661 "method": "nvmf_set_config", 00:13:07.661 "params": { 00:13:07.661 "discovery_filter": "match_any", 00:13:07.661 "admin_cmd_passthru": { 00:13:07.661 "identify_ctrlr": false 00:13:07.661 } 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "nvmf_set_max_subsystems", 00:13:07.661 "params": { 00:13:07.661 "max_subsystems": 1024 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "nvmf_set_crdt", 00:13:07.661 "params": { 00:13:07.661 "crdt1": 0, 00:13:07.661 "crdt2": 0, 00:13:07.661 "crdt3": 0 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 
00:13:07.661 "method": "nvmf_create_transport", 00:13:07.661 "params": { 00:13:07.661 "trtype": "TCP", 00:13:07.661 "max_queue_depth": 128, 00:13:07.661 "max_io_qpairs_per_ctrlr": 127, 00:13:07.661 "in_capsule_data_size": 4096, 00:13:07.661 "max_io_size": 131072, 00:13:07.661 "io_unit_size": 131072, 00:13:07.661 "max_aq_depth": 128, 00:13:07.661 "num_shared_buffers": 511, 00:13:07.661 "buf_cache_size": 4294967295, 00:13:07.661 "dif_insert_or_strip": false, 00:13:07.661 "zcopy": false, 00:13:07.661 "c2h_success": false, 00:13:07.661 "sock_priority": 0, 00:13:07.661 "abort_timeout_sec": 1, 00:13:07.661 "ack_timeout": 0, 00:13:07.661 "data_wr_pool_size": 0 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "nvmf_create_subsystem", 00:13:07.661 "params": { 00:13:07.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.661 "allow_any_host": false, 00:13:07.661 "serial_number": "00000000000000000000", 00:13:07.661 "model_number": "SPDK bdev Controller", 00:13:07.661 "max_namespaces": 32, 00:13:07.661 "min_cntlid": 1, 00:13:07.661 "max_cntlid": 65519, 00:13:07.661 "ana_reporting": false 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "nvmf_subsystem_add_host", 00:13:07.661 "params": { 00:13:07.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.661 "host": "nqn.2016-06.io.spdk:host1", 00:13:07.661 "psk": "key0" 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "nvmf_subsystem_add_ns", 00:13:07.661 "params": { 00:13:07.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.661 "namespace": { 00:13:07.661 "nsid": 1, 00:13:07.661 "bdev_name": "malloc0", 00:13:07.661 "nguid": "CAD4B86EBA6F4715B46F6FAADCB23229", 00:13:07.661 "uuid": "cad4b86e-ba6f-4715-b46f-6faadcb23229", 00:13:07.661 "no_auto_visible": false 00:13:07.661 } 00:13:07.661 } 00:13:07.661 }, 00:13:07.661 { 00:13:07.661 "method": "nvmf_subsystem_add_listener", 00:13:07.661 "params": { 00:13:07.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.661 "listen_address": { 00:13:07.661 "trtype": "TCP", 00:13:07.661 "adrfam": "IPv4", 00:13:07.661 "traddr": "10.0.0.2", 00:13:07.661 "trsvcid": "4420" 00:13:07.661 }, 00:13:07.661 "secure_channel": false, 00:13:07.661 "sock_impl": "ssl" 00:13:07.661 } 00:13:07.661 } 00:13:07.661 ] 00:13:07.661 } 00:13:07.661 ] 00:13:07.661 }' 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73489 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73489 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73489 ']' 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
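The nvmfappstart -c /dev/fd/62 call above restarts the target from the JSON captured earlier with save_config: the configuration is echoed through a file descriptor, so the new nvmf_tgt comes up with the keyring, uring socket default, TLS listener and subsystem already applied, with no post-start RPCs. A minimal sketch of that idiom, reusing the $tgtcfg variable from this script (the test additionally runs the target inside the nvmf_tgt_ns_spdk network namespace via ip netns exec, omitted here, and the exact plumbing is assumed to be process substitution):

    # restart the target from the saved JSON; <(...) appears to the app as /dev/fd/NN,
    # which is why the command line above shows -c /dev/fd/62
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &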
00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.661 20:49:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.661 [2024-07-15 20:49:29.536253] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:07.661 [2024-07-15 20:49:29.536313] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.921 [2024-07-15 20:49:29.678142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.921 [2024-07-15 20:49:29.754417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.921 [2024-07-15 20:49:29.754465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.921 [2024-07-15 20:49:29.754475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.921 [2024-07-15 20:49:29.754483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.921 [2024-07-15 20:49:29.754489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.921 [2024-07-15 20:49:29.754560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.181 [2024-07-15 20:49:29.907727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:08.181 [2024-07-15 20:49:29.974003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.181 [2024-07-15 20:49:30.005897] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:08.181 [2024-07-15 20:49:30.006081] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.750 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.750 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:08.750 20:49:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.750 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.750 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.750 20:49:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73521 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73521 /var/tmp/bdevperf.sock 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73521 ']' 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
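At this point the restored target is up again, listening on 10.0.0.2 port 4420 with the experimental TLS listener re-created from the saved config. One way (not exercised in this log, shown only as an assumed-by-the-editor example) to confirm the restored state would be to query the target's RPC socket, /var/tmp/spdk.sock per the waitforlisten above:

    # not executed in this log: dump the restored subsystem, host/PSK binding and listener
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems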
00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:13:08.751 "subsystems": [ 00:13:08.751 { 00:13:08.751 "subsystem": "keyring", 00:13:08.751 "config": [ 00:13:08.751 { 00:13:08.751 "method": "keyring_file_add_key", 00:13:08.751 "params": { 00:13:08.751 "name": "key0", 00:13:08.751 "path": "/tmp/tmp.CnXej2IDDC" 00:13:08.751 } 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "subsystem": "iobuf", 00:13:08.751 "config": [ 00:13:08.751 { 00:13:08.751 "method": "iobuf_set_options", 00:13:08.751 "params": { 00:13:08.751 "small_pool_count": 8192, 00:13:08.751 "large_pool_count": 1024, 00:13:08.751 "small_bufsize": 8192, 00:13:08.751 "large_bufsize": 135168 00:13:08.751 } 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "subsystem": "sock", 00:13:08.751 "config": [ 00:13:08.751 { 00:13:08.751 "method": "sock_set_default_impl", 00:13:08.751 "params": { 00:13:08.751 "impl_name": "uring" 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "sock_impl_set_options", 00:13:08.751 "params": { 00:13:08.751 "impl_name": "ssl", 00:13:08.751 "recv_buf_size": 4096, 00:13:08.751 "send_buf_size": 4096, 00:13:08.751 "enable_recv_pipe": true, 00:13:08.751 "enable_quickack": false, 00:13:08.751 "enable_placement_id": 0, 00:13:08.751 "enable_zerocopy_send_server": true, 00:13:08.751 "enable_zerocopy_send_client": false, 00:13:08.751 "zerocopy_threshold": 0, 00:13:08.751 "tls_version": 0, 00:13:08.751 "enable_ktls": false 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "sock_impl_set_options", 00:13:08.751 "params": { 00:13:08.751 "impl_name": "posix", 00:13:08.751 "recv_buf_size": 2097152, 00:13:08.751 "send_buf_size": 2097152, 00:13:08.751 "enable_recv_pipe": true, 00:13:08.751 "enable_quickack": false, 00:13:08.751 "enable_placement_id": 0, 00:13:08.751 "enable_zerocopy_send_server": true, 00:13:08.751 "enable_zerocopy_send_client": false, 00:13:08.751 "zerocopy_threshold": 0, 00:13:08.751 "tls_version": 0, 00:13:08.751 "enable_ktls": false 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "sock_impl_set_options", 00:13:08.751 "params": { 00:13:08.751 "impl_name": "uring", 00:13:08.751 "recv_buf_size": 2097152, 00:13:08.751 "send_buf_size": 2097152, 00:13:08.751 "enable_recv_pipe": true, 00:13:08.751 "enable_quickack": false, 00:13:08.751 "enable_placement_id": 0, 00:13:08.751 "enable_zerocopy_send_server": false, 00:13:08.751 "enable_zerocopy_send_client": false, 00:13:08.751 "zerocopy_threshold": 0, 00:13:08.751 "tls_version": 0, 00:13:08.751 "enable_ktls": false 00:13:08.751 } 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "subsystem": "vmd", 00:13:08.751 "config": [] 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "subsystem": "accel", 00:13:08.751 "config": [ 00:13:08.751 { 00:13:08.751 "method": "accel_set_options", 00:13:08.751 "params": { 00:13:08.751 "small_cache_size": 128, 00:13:08.751 "large_cache_size": 16, 00:13:08.751 "task_count": 2048, 00:13:08.751 "sequence_count": 2048, 00:13:08.751 "buf_count": 2048 00:13:08.751 } 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "subsystem": "bdev", 00:13:08.751 "config": [ 00:13:08.751 { 00:13:08.751 "method": "bdev_set_options", 00:13:08.751 "params": { 00:13:08.751 "bdev_io_pool_size": 65535, 00:13:08.751 "bdev_io_cache_size": 256, 00:13:08.751 "bdev_auto_examine": true, 00:13:08.751 "iobuf_small_cache_size": 128, 00:13:08.751 "iobuf_large_cache_size": 16 00:13:08.751 } 
00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_raid_set_options", 00:13:08.751 "params": { 00:13:08.751 "process_window_size_kb": 1024 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_iscsi_set_options", 00:13:08.751 "params": { 00:13:08.751 "timeout_sec": 30 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_nvme_set_options", 00:13:08.751 "params": { 00:13:08.751 "action_on_timeout": "none", 00:13:08.751 "timeout_us": 0, 00:13:08.751 "timeout_admin_us": 0, 00:13:08.751 "keep_alive_timeout_ms": 10000, 00:13:08.751 "arbitration_burst": 0, 00:13:08.751 "low_priority_weight": 0, 00:13:08.751 "medium_priority_weight": 0, 00:13:08.751 "high_priority_weight": 0, 00:13:08.751 "nvme_adminq_poll_period_us": 10000, 00:13:08.751 "nvme_ioq_poll_period_us": 0, 00:13:08.751 "io_queue_requests": 512, 00:13:08.751 "delay_cmd_submit": true, 00:13:08.751 "transport_retry_count": 4, 00:13:08.751 "bdev_retry_count": 3, 00:13:08.751 "transport_ack_timeout": 0, 00:13:08.751 "ctrlr_loss_timeout_sec": 0, 00:13:08.751 "reconnect_delay_sec": 0, 00:13:08.751 "fast_io_fail_timeout_sec": 0, 00:13:08.751 "disable_auto_failback": false, 00:13:08.751 "generate_uuids": false, 00:13:08.751 "transport_tos": 0, 00:13:08.751 "nvme_error_stat": false, 00:13:08.751 "rdma_srq_size": 0, 00:13:08.751 "io_path_stat": false, 00:13:08.751 "allow_accel_sequence": false, 00:13:08.751 "rdma_max_cq_size": 0, 00:13:08.751 "rdma_cm_event_timeout_ms": 0, 00:13:08.751 "dhchap_digests": [ 00:13:08.751 "sha256", 00:13:08.751 "sha384", 00:13:08.751 "sha512" 00:13:08.751 ], 00:13:08.751 "dhchap_dhgroups": [ 00:13:08.751 "null", 00:13:08.751 "ffdhe2048", 00:13:08.751 "ffdhe3072", 00:13:08.751 "ffdhe4096", 00:13:08.751 "ffdhe6144", 00:13:08.751 "ffdhe8192" 00:13:08.751 ] 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_nvme_attach_controller", 00:13:08.751 "params": { 00:13:08.751 "name": "nvme0", 00:13:08.751 "trtype": "TCP", 00:13:08.751 "adrfam": "IPv4", 00:13:08.751 "traddr": "10.0.0.2", 00:13:08.751 "trsvcid": "4420", 00:13:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.751 "prchk_reftag": false, 00:13:08.751 "prchk_guard": false, 00:13:08.751 "ctrlr_loss_timeout_sec": 0, 00:13:08.751 "reconnect_delay_sec": 0, 00:13:08.751 "fast_io_fail_timeout_sec": 0, 00:13:08.751 "psk": "key0", 00:13:08.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.751 "hdgst": false, 00:13:08.751 "ddgst": false 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_nvme_set_hotplug", 00:13:08.751 "params": { 00:13:08.751 "period_us": 100000, 00:13:08.751 "enable": false 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_enable_histogram", 00:13:08.751 "params": { 00:13:08.751 "name": "nvme0n1", 00:13:08.751 "enable": true 00:13:08.751 } 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "method": "bdev_wait_for_examine" 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }, 00:13:08.751 { 00:13:08.751 "subsystem": "nbd", 00:13:08.751 "config": [] 00:13:08.751 } 00:13:08.751 ] 00:13:08.751 }' 00:13:08.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
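Unlike the first initiator, this second bdevperf instance is not configured by individual RPCs after start-up: the whole bperfcfg JSON above (keyring, sock options, bdev_nvme_attach_controller with psk key0, bdev_enable_histogram) is handed to it as a config file on the command line, fed through /dev/fd/63 as the next command shows. A condensed sketch of that invocation, assuming process substitution as the shell mechanism:

    # configure bdevperf entirely from the saved JSON; the verify run itself is still
    # triggered afterwards with bdevperf.py ... perform_tests
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")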
00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.751 20:49:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:08.751 [2024-07-15 20:49:30.468513] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:08.751 [2024-07-15 20:49:30.468732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73521 ] 00:13:08.751 [2024-07-15 20:49:30.610066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.012 [2024-07-15 20:49:30.697096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.012 [2024-07-15 20:49:30.819148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:09.012 [2024-07-15 20:49:30.857595] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:09.580 20:49:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.580 20:49:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:09.580 20:49:31 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:09.580 20:49:31 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:13:09.580 20:49:31 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.580 20:49:31 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:09.839 Running I/O for 1 seconds... 
00:13:10.775 00:13:10.775 Latency(us) 00:13:10.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:10.775 Verification LBA range: start 0x0 length 0x2000 00:13:10.775 nvme0n1 : 1.01 5889.25 23.00 0.00 0.00 21577.27 4605.94 15897.09 00:13:10.775 =================================================================================================================== 00:13:10.775 Total : 5889.25 23.00 0.00 0.00 21577.27 4605.94 15897.09 00:13:10.775 0 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:10.775 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:10.775 nvmf_trace.0 00:13:11.034 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 73521 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73521 ']' 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73521 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73521 00:13:11.035 killing process with pid 73521 00:13:11.035 Received shutdown signal, test time was about 1.000000 seconds 00:13:11.035 00:13:11.035 Latency(us) 00:13:11.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.035 =================================================================================================================== 00:13:11.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73521' 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73521 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73521 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.035 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.294 rmmod nvme_tcp 00:13:11.294 rmmod nvme_fabrics 00:13:11.294 rmmod nvme_keyring 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73489 ']' 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73489 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73489 ']' 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73489 00:13:11.294 20:49:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73489 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.294 killing process with pid 73489 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73489' 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73489 00:13:11.294 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73489 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rAO7dqCM6w /tmp/tmp.RZiFpHS4Ag /tmp/tmp.CnXej2IDDC 00:13:11.558 00:13:11.558 real 1m19.546s 00:13:11.558 user 1m59.326s 00:13:11.558 sys 0m28.889s 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.558 ************************************ 00:13:11.558 END TEST nvmf_tls 00:13:11.558 ************************************ 00:13:11.558 20:49:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.558 20:49:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:11.558 20:49:33 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:11.558 20:49:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:11.558 20:49:33 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.558 20:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.558 ************************************ 00:13:11.558 START TEST nvmf_fips 00:13:11.558 ************************************ 00:13:11.558 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:11.819 * Looking for test storage... 00:13:11.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:11.819 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:13:11.820 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:13:12.078 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:13:12.078 Error setting digest 00:13:12.079 00022BFCEF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:13:12.079 00022BFCEF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:12.079 Cannot find device "nvmf_tgt_br" 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.079 Cannot find device "nvmf_tgt_br2" 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:12.079 Cannot find device "nvmf_tgt_br" 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:12.079 Cannot find device "nvmf_tgt_br2" 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.079 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:12.079 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.339 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.339 20:49:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.339 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:12.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:13:12.340 00:13:12.340 --- 10.0.0.2 ping statistics --- 00:13:12.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.340 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:12.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:12.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:13:12.340 00:13:12.340 --- 10.0.0.3 ping statistics --- 00:13:12.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.340 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:12.340 00:13:12.340 --- 10.0.0.1 ping statistics --- 00:13:12.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.340 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.340 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.598 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.598 20:49:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:12.598 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73785 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73785 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 73785 ']' 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.599 20:49:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:12.599 [2024-07-15 20:49:34.355856] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:13:12.599 [2024-07-15 20:49:34.355928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.599 [2024-07-15 20:49:34.498399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.857 [2024-07-15 20:49:34.583461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.857 [2024-07-15 20:49:34.583512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.857 [2024-07-15 20:49:34.583522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.857 [2024-07-15 20:49:34.583530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.857 [2024-07-15 20:49:34.583537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.857 [2024-07-15 20:49:34.583567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.857 [2024-07-15 20:49:34.624501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.425 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.684 [2024-07-15 20:49:35.393796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.684 [2024-07-15 20:49:35.409724] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:13.684 [2024-07-15 20:49:35.409893] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.684 [2024-07-15 20:49:35.438422] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:13.684 malloc0 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
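For orientation, the PSK plumbing traced above (fips.sh@136-141) amounts to writing an NVMe/TCP TLS interchange key to disk and registering it with the target. The sketch below is not the literal body of setup_nvmf_tgt_conf, which this trace does not expand; the key value, paths and NQNs are copied from the surrounding log, while the exact RPC sequence and the --psk form are assumptions inferred from the tcp.c "PSK path" deprecation warning.

  # Write the TLS PSK interchange key and restrict its permissions (mirrors fips.sh@138-139 above).
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"
  # Rough shape of the target-side configuration behind setup_nvmf_tgt_conf (assumed, not traced here).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b malloc0 32 4096     # bdev name from the log; size/block size illustrative
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"   # --psk form assumed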
00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73819 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73819 /var/tmp/bdevperf.sock 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 73819 ']' 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:13.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.684 20:49:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 [2024-07-15 20:49:35.536997] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:13.684 [2024-07-15 20:49:35.537070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73819 ] 00:13:13.943 [2024-07-15 20:49:35.677709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.943 [2024-07-15 20:49:35.766733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.943 [2024-07-15 20:49:35.808025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:14.511 20:49:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.511 20:49:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:13:14.511 20:49:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:14.770 [2024-07-15 20:49:36.529408] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:14.770 [2024-07-15 20:49:36.529510] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:14.770 TLSTESTn1 00:13:14.770 20:49:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:15.028 Running I/O for 10 seconds... 
00:13:25.037 00:13:25.037 Latency(us) 00:13:25.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.037 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:25.037 Verification LBA range: start 0x0 length 0x2000 00:13:25.037 TLSTESTn1 : 10.01 5806.16 22.68 0.00 0.00 22011.69 4737.54 16212.92 00:13:25.037 =================================================================================================================== 00:13:25.037 Total : 5806.16 22.68 0.00 0.00 22011.69 4737.54 16212.92 00:13:25.037 0 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:25.037 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:25.038 nvmf_trace.0 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73819 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 73819 ']' 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 73819 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73819 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73819' 00:13:25.038 killing process with pid 73819 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 73819 00:13:25.038 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.038 00:13:25.038 Latency(us) 00:13:25.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.038 =================================================================================================================== 00:13:25.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.038 [2024-07-15 20:49:46.868582] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:25.038 20:49:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 73819 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.296 rmmod nvme_tcp 00:13:25.296 rmmod nvme_fabrics 00:13:25.296 rmmod nvme_keyring 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73785 ']' 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73785 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 73785 ']' 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 73785 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73785 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:25.296 killing process with pid 73785 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73785' 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 73785 00:13:25.296 [2024-07-15 20:49:47.184303] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:25.296 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 73785 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.554 20:49:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:25.812 20:49:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:25.812 00:13:25.812 real 0m14.082s 00:13:25.812 user 0m18.162s 00:13:25.812 sys 0m6.023s 00:13:25.812 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.812 20:49:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:25.812 ************************************ 00:13:25.812 END TEST nvmf_fips 00:13:25.812 ************************************ 00:13:25.812 20:49:47 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:25.812 20:49:47 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:13:25.812 20:49:47 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:13:25.812 20:49:47 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.812 20:49:47 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.812 20:49:47 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:25.812 20:49:47 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.812 20:49:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.812 ************************************ 00:13:25.812 START TEST nvmf_identify 00:13:25.812 ************************************ 00:13:25.812 20:49:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:26.071 * Looking for test storage... 00:13:26.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.071 20:49:47 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.071 20:49:47 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:26.071 Cannot find device "nvmf_tgt_br" 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.071 Cannot find device "nvmf_tgt_br2" 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:13:26.071 20:49:47 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:26.071 Cannot find device "nvmf_tgt_br" 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:26.071 Cannot find device "nvmf_tgt_br2" 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:26.071 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:26.330 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.330 20:49:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:26.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:13:26.330 00:13:26.330 --- 10.0.0.2 ping statistics --- 00:13:26.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.330 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:26.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:26.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:26.330 00:13:26.330 --- 10.0.0.3 ping statistics --- 00:13:26.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.330 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:26.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:26.330 00:13:26.330 --- 10.0.0.1 ping statistics --- 00:13:26.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.330 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74173 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74173 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74173 ']' 00:13:26.330 20:49:48 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.330 20:49:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:26.588 [2024-07-15 20:49:48.274312] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:26.588 [2024-07-15 20:49:48.274372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.588 [2024-07-15 20:49:48.418119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.882 [2024-07-15 20:49:48.503671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.882 [2024-07-15 20:49:48.503718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.883 [2024-07-15 20:49:48.503743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.883 [2024-07-15 20:49:48.503751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.883 [2024-07-15 20:49:48.503758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
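The waitforlisten helper invoked above is not expanded in this trace; conceptually it just blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A rough stand-in, with the launch command taken from the log and the polling loop purely illustrative:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready (the real waitforlisten also re-checks the pid
  # and gives up after max_retries); rpc_get_methods is used here only as a cheap probe RPC.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done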
00:13:26.883 [2024-07-15 20:49:48.504249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.883 [2024-07-15 20:49:48.504426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.883 [2024-07-15 20:49:48.504620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.883 [2024-07-15 20:49:48.504621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.883 [2024-07-15 20:49:48.545310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 [2024-07-15 20:49:49.095347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.479 Malloc0 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.479 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 [2024-07-15 20:49:49.228913] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:27.480 [ 00:13:27.480 { 00:13:27.480 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:27.480 "subtype": "Discovery", 00:13:27.480 "listen_addresses": [ 00:13:27.480 { 00:13:27.480 "trtype": "TCP", 00:13:27.480 "adrfam": "IPv4", 00:13:27.480 "traddr": "10.0.0.2", 00:13:27.480 "trsvcid": "4420" 00:13:27.480 } 00:13:27.480 ], 00:13:27.480 "allow_any_host": true, 00:13:27.480 "hosts": [] 00:13:27.480 }, 00:13:27.480 { 00:13:27.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.480 "subtype": "NVMe", 00:13:27.480 "listen_addresses": [ 00:13:27.480 { 00:13:27.480 "trtype": "TCP", 00:13:27.480 "adrfam": "IPv4", 00:13:27.480 "traddr": "10.0.0.2", 00:13:27.480 "trsvcid": "4420" 00:13:27.480 } 00:13:27.480 ], 00:13:27.480 "allow_any_host": true, 00:13:27.480 "hosts": [], 00:13:27.480 "serial_number": "SPDK00000000000001", 00:13:27.480 "model_number": "SPDK bdev Controller", 00:13:27.480 "max_namespaces": 32, 00:13:27.480 "min_cntlid": 1, 00:13:27.480 "max_cntlid": 65519, 00:13:27.480 "namespaces": [ 00:13:27.480 { 00:13:27.480 "nsid": 1, 00:13:27.480 "bdev_name": "Malloc0", 00:13:27.480 "name": "Malloc0", 00:13:27.480 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:27.480 "eui64": "ABCDEF0123456789", 00:13:27.480 "uuid": "af41ed73-d137-40b2-8e08-59cd8c02e821" 00:13:27.480 } 00:13:27.480 ] 00:13:27.480 } 00:13:27.480 ] 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.480 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:27.480 [2024-07-15 20:49:49.301025] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:13:27.480 [2024-07-15 20:49:49.301076] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74208 ] 00:13:27.742 [2024-07-15 20:49:49.437619] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:27.742 [2024-07-15 20:49:49.437680] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:27.742 [2024-07-15 20:49:49.437686] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:27.742 [2024-07-15 20:49:49.437699] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:27.742 [2024-07-15 20:49:49.437708] sock.c: 347:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:27.742 [2024-07-15 20:49:49.437830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:27.742 [2024-07-15 20:49:49.437868] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5a32c0 0 00:13:27.742 [2024-07-15 20:49:49.453180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:27.742 [2024-07-15 20:49:49.453199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:27.742 [2024-07-15 20:49:49.453204] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:27.742 [2024-07-15 20:49:49.453207] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:27.742 [2024-07-15 20:49:49.453255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.742 [2024-07-15 20:49:49.453261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.742 [2024-07-15 20:49:49.453265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.742 [2024-07-15 20:49:49.453277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:27.742 [2024-07-15 20:49:49.453300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.742 [2024-07-15 20:49:49.458209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.742 [2024-07-15 20:49:49.458225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.742 [2024-07-15 20:49:49.458230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.742 [2024-07-15 20:49:49.458235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.742 [2024-07-15 20:49:49.458246] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:27.742 [2024-07-15 20:49:49.458253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:27.742 [2024-07-15 20:49:49.458259] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:27.742 [2024-07-15 20:49:49.458274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.742 [2024-07-15 20:49:49.458279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.742 
[2024-07-15 20:49:49.458284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.742 [2024-07-15 20:49:49.458291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.742 [2024-07-15 20:49:49.458310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.742 [2024-07-15 20:49:49.458354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.742 [2024-07-15 20:49:49.458361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.742 [2024-07-15 20:49:49.458364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.742 [2024-07-15 20:49:49.458368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.742 [2024-07-15 20:49:49.458373] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:27.743 [2024-07-15 20:49:49.458380] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:27.743 [2024-07-15 20:49:49.458387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.458400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.743 [2024-07-15 20:49:49.458413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.458452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.458457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.458461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.458470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:27.743 [2024-07-15 20:49:49.458477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:27.743 [2024-07-15 20:49:49.458484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.458497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.743 [2024-07-15 20:49:49.458509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.458542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.458548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.458552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.458560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:27.743 [2024-07-15 20:49:49.458569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.458582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.743 [2024-07-15 20:49:49.458594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.458627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.458632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.458636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.458644] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:27.743 [2024-07-15 20:49:49.458649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:27.743 [2024-07-15 20:49:49.458656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:27.743 [2024-07-15 20:49:49.458762] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:27.743 [2024-07-15 20:49:49.458768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:27.743 [2024-07-15 20:49:49.458776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.458789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.743 [2024-07-15 20:49:49.458802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.458839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.458844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.458848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458852] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.458857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:27.743 [2024-07-15 20:49:49.458865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.458878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.743 [2024-07-15 20:49:49.458890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.458928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.458934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.458938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.458946] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:27.743 [2024-07-15 20:49:49.458950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:27.743 [2024-07-15 20:49:49.458958] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:27.743 [2024-07-15 20:49:49.458967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:27.743 [2024-07-15 20:49:49.458975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.458979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.458986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.743 [2024-07-15 20:49:49.458998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.459065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:27.743 [2024-07-15 20:49:49.459071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:27.743 [2024-07-15 20:49:49.459075] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459078] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a32c0): datao=0, datal=4096, cccid=0 00:13:27.743 [2024-07-15 20:49:49.459084] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e4940) on tqpair(0x5a32c0): expected_datao=0, payload_size=4096 00:13:27.743 [2024-07-15 20:49:49.459088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459095] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459099] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.459112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.459116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.459127] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:27.743 [2024-07-15 20:49:49.459132] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:27.743 [2024-07-15 20:49:49.459137] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:27.743 [2024-07-15 20:49:49.459142] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:27.743 [2024-07-15 20:49:49.459146] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:27.743 [2024-07-15 20:49:49.459151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:27.743 [2024-07-15 20:49:49.459159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:27.743 [2024-07-15 20:49:49.459176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.459190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:27.743 [2024-07-15 20:49:49.459204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.743 [2024-07-15 20:49:49.459242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.743 [2024-07-15 20:49:49.459248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.743 [2024-07-15 20:49:49.459252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.743 [2024-07-15 20:49:49.459262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.459275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.743 [2024-07-15 20:49:49.459281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:13:27.743 [2024-07-15 20:49:49.459285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.459293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.743 [2024-07-15 20:49:49.459299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.743 [2024-07-15 20:49:49.459306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5a32c0) 00:13:27.743 [2024-07-15 20:49:49.459312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.743 [2024-07-15 20:49:49.459318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.459330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.744 [2024-07-15 20:49:49.459335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:27.744 [2024-07-15 20:49:49.459346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:27.744 [2024-07-15 20:49:49.459352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.459362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.744 [2024-07-15 20:49:49.459376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4940, cid 0, qid 0 00:13:27.744 [2024-07-15 20:49:49.459381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4ac0, cid 1, qid 0 00:13:27.744 [2024-07-15 20:49:49.459385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4c40, cid 2, qid 0 00:13:27.744 [2024-07-15 20:49:49.459390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.744 [2024-07-15 20:49:49.459394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4f40, cid 4, qid 0 00:13:27.744 [2024-07-15 20:49:49.459468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.744 [2024-07-15 20:49:49.459473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.744 [2024-07-15 20:49:49.459477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4f40) on tqpair=0x5a32c0 00:13:27.744 [2024-07-15 20:49:49.459486] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:27.744 [2024-07-15 20:49:49.459494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:27.744 [2024-07-15 20:49:49.459503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.459512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.744 [2024-07-15 20:49:49.459525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4f40, cid 4, qid 0 00:13:27.744 [2024-07-15 20:49:49.459564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:27.744 [2024-07-15 20:49:49.459570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:27.744 [2024-07-15 20:49:49.459573] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459577] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a32c0): datao=0, datal=4096, cccid=4 00:13:27.744 [2024-07-15 20:49:49.459582] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e4f40) on tqpair(0x5a32c0): expected_datao=0, payload_size=4096 00:13:27.744 [2024-07-15 20:49:49.459586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459596] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.744 [2024-07-15 20:49:49.459609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.744 [2024-07-15 20:49:49.459612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4f40) on tqpair=0x5a32c0 00:13:27.744 [2024-07-15 20:49:49.459627] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:27.744 [2024-07-15 20:49:49.459651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.459661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.744 [2024-07-15 20:49:49.459668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.459681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:27.744 [2024-07-15 20:49:49.459698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4f40, cid 4, qid 0 00:13:27.744 [2024-07-15 20:49:49.459703] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e50c0, cid 5, qid 0 00:13:27.744 [2024-07-15 20:49:49.459776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:27.744 [2024-07-15 20:49:49.459782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:27.744 [2024-07-15 20:49:49.459786] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459789] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a32c0): datao=0, datal=1024, cccid=4 00:13:27.744 [2024-07-15 20:49:49.459794] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e4f40) on tqpair(0x5a32c0): expected_datao=0, payload_size=1024 00:13:27.744 [2024-07-15 20:49:49.459799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459805] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459808] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.744 [2024-07-15 20:49:49.459819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.744 [2024-07-15 20:49:49.459822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e50c0) on tqpair=0x5a32c0 00:13:27.744 [2024-07-15 20:49:49.459839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.744 [2024-07-15 20:49:49.459845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.744 [2024-07-15 20:49:49.459849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4f40) on tqpair=0x5a32c0 00:13:27.744 [2024-07-15 20:49:49.459861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.459871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.744 [2024-07-15 20:49:49.459887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4f40, cid 4, qid 0 00:13:27.744 [2024-07-15 20:49:49.459936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:27.744 [2024-07-15 20:49:49.459942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:27.744 [2024-07-15 20:49:49.459945] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459949] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a32c0): datao=0, datal=3072, cccid=4 00:13:27.744 [2024-07-15 20:49:49.459954] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e4f40) on tqpair(0x5a32c0): expected_datao=0, payload_size=3072 00:13:27.744 [2024-07-15 20:49:49.459958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459964] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459968] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 
20:49:49.459975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.744 [2024-07-15 20:49:49.459980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.744 [2024-07-15 20:49:49.459984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4f40) on tqpair=0x5a32c0 00:13:27.744 [2024-07-15 20:49:49.459995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.459999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a32c0) 00:13:27.744 [2024-07-15 20:49:49.460005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.744 [2024-07-15 20:49:49.460021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4f40, cid 4, qid 0 00:13:27.744 [2024-07-15 20:49:49.460065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:27.744 [2024-07-15 20:49:49.460071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:27.744 [2024-07-15 20:49:49.460075] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.460079] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a32c0): datao=0, datal=8, cccid=4 00:13:27.744 [2024-07-15 20:49:49.460084] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e4f40) on tqpair(0x5a32c0): expected_datao=0, payload_size=8 00:13:27.744 [2024-07-15 20:49:49.460088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.460094] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.460097] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:27.744 [2024-07-15 20:49:49.460108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.744 [2024-07-15 20:49:49.460114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.744 ===================================================== 00:13:27.744 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:27.744 ===================================================== 00:13:27.744 Controller Capabilities/Features 00:13:27.744 ================================ 00:13:27.744 Vendor ID: 0000 00:13:27.744 Subsystem Vendor ID: 0000 00:13:27.744 Serial Number: .................... 00:13:27.744 Model Number: ........................................ 
00:13:27.744 Firmware Version: 24.09 00:13:27.744 Recommended Arb Burst: 0 00:13:27.744 IEEE OUI Identifier: 00 00 00 00:13:27.744 Multi-path I/O 00:13:27.744 May have multiple subsystem ports: No 00:13:27.744 May have multiple controllers: No 00:13:27.744 Associated with SR-IOV VF: No 00:13:27.744 Max Data Transfer Size: 131072 00:13:27.744 Max Number of Namespaces: 0 00:13:27.744 Max Number of I/O Queues: 1024 00:13:27.744 NVMe Specification Version (VS): 1.3 00:13:27.744 NVMe Specification Version (Identify): 1.3 00:13:27.744 Maximum Queue Entries: 128 00:13:27.744 Contiguous Queues Required: Yes 00:13:27.744 Arbitration Mechanisms Supported 00:13:27.744 Weighted Round Robin: Not Supported 00:13:27.744 Vendor Specific: Not Supported 00:13:27.744 Reset Timeout: 15000 ms 00:13:27.744 Doorbell Stride: 4 bytes 00:13:27.744 NVM Subsystem Reset: Not Supported 00:13:27.744 Command Sets Supported 00:13:27.744 NVM Command Set: Supported 00:13:27.744 Boot Partition: Not Supported 00:13:27.744 Memory Page Size Minimum: 4096 bytes 00:13:27.744 Memory Page Size Maximum: 4096 bytes 00:13:27.744 Persistent Memory Region: Not Supported 00:13:27.744 Optional Asynchronous Events Supported 00:13:27.744 Namespace Attribute Notices: Not Supported 00:13:27.745 Firmware Activation Notices: Not Supported 00:13:27.745 ANA Change Notices: Not Supported 00:13:27.745 PLE Aggregate Log Change Notices: Not Supported 00:13:27.745 LBA Status Info Alert Notices: Not Supported 00:13:27.745 EGE Aggregate Log Change Notices: Not Supported 00:13:27.745 Normal NVM Subsystem Shutdown event: Not Supported 00:13:27.745 Zone Descriptor Change Notices: Not Supported 00:13:27.745 Discovery Log Change Notices: Supported 00:13:27.745 Controller Attributes 00:13:27.745 128-bit Host Identifier: Not Supported 00:13:27.745 Non-Operational Permissive Mode: Not Supported 00:13:27.745 NVM Sets: Not Supported 00:13:27.745 Read Recovery Levels: Not Supported 00:13:27.745 Endurance Groups: Not Supported 00:13:27.745 Predictable Latency Mode: Not Supported 00:13:27.745 Traffic Based Keep ALive: Not Supported 00:13:27.745 Namespace Granularity: Not Supported 00:13:27.745 SQ Associations: Not Supported 00:13:27.745 UUID List: Not Supported 00:13:27.745 Multi-Domain Subsystem: Not Supported 00:13:27.745 Fixed Capacity Management: Not Supported 00:13:27.745 Variable Capacity Management: Not Supported 00:13:27.745 Delete Endurance Group: Not Supported 00:13:27.745 Delete NVM Set: Not Supported 00:13:27.745 Extended LBA Formats Supported: Not Supported 00:13:27.745 Flexible Data Placement Supported: Not Supported 00:13:27.745 00:13:27.745 Controller Memory Buffer Support 00:13:27.745 ================================ 00:13:27.745 Supported: No 00:13:27.745 00:13:27.745 Persistent Memory Region Support 00:13:27.745 ================================ 00:13:27.745 Supported: No 00:13:27.745 00:13:27.745 Admin Command Set Attributes 00:13:27.745 ============================ 00:13:27.745 Security Send/Receive: Not Supported 00:13:27.745 Format NVM: Not Supported 00:13:27.745 Firmware Activate/Download: Not Supported 00:13:27.745 Namespace Management: Not Supported 00:13:27.745 Device Self-Test: Not Supported 00:13:27.745 Directives: Not Supported 00:13:27.745 NVMe-MI: Not Supported 00:13:27.745 Virtualization Management: Not Supported 00:13:27.745 Doorbell Buffer Config: Not Supported 00:13:27.745 Get LBA Status Capability: Not Supported 00:13:27.745 Command & Feature Lockdown Capability: Not Supported 00:13:27.745 Abort Command Limit: 1 00:13:27.745 Async 
Event Request Limit: 4 00:13:27.745 Number of Firmware Slots: N/A 00:13:27.745 Firmware Slot 1 Read-Only: N/A 00:13:27.745 Firmware Activation Without Reset: N/A 00:13:27.745 Multiple Update Detection Support: N/A 00:13:27.745 Firmware Update Granularity: No Information Provided 00:13:27.745 Per-Namespace SMART Log: No 00:13:27.745 Asymmetric Namespace Access Log Page: Not Supported 00:13:27.745 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:27.745 Command Effects Log Page: Not Supported 00:13:27.745 Get Log Page Extended Data: Supported 00:13:27.745 Telemetry Log Pages: Not Supported 00:13:27.745 Persistent Event Log Pages: Not Supported 00:13:27.745 Supported Log Pages Log Page: May Support 00:13:27.745 Commands Supported & Effects Log Page: Not Supported 00:13:27.745 Feature Identifiers & Effects Log Page:May Support 00:13:27.745 NVMe-MI Commands & Effects Log Page: May Support 00:13:27.745 Data Area 4 for Telemetry Log: Not Supported 00:13:27.745 Error Log Page Entries Supported: 128 00:13:27.745 Keep Alive: Not Supported 00:13:27.745 00:13:27.745 NVM Command Set Attributes 00:13:27.745 ========================== 00:13:27.745 Submission Queue Entry Size 00:13:27.745 Max: 1 00:13:27.745 Min: 1 00:13:27.745 Completion Queue Entry Size 00:13:27.745 Max: 1 00:13:27.745 Min: 1 00:13:27.745 Number of Namespaces: 0 00:13:27.745 Compare Command: Not Supported 00:13:27.745 Write Uncorrectable Command: Not Supported 00:13:27.745 Dataset Management Command: Not Supported 00:13:27.745 Write Zeroes Command: Not Supported 00:13:27.745 Set Features Save Field: Not Supported 00:13:27.745 Reservations: Not Supported 00:13:27.745 Timestamp: Not Supported 00:13:27.745 Copy: Not Supported 00:13:27.745 Volatile Write Cache: Not Present 00:13:27.745 Atomic Write Unit (Normal): 1 00:13:27.745 Atomic Write Unit (PFail): 1 00:13:27.745 Atomic Compare & Write Unit: 1 00:13:27.745 Fused Compare & Write: Supported 00:13:27.745 Scatter-Gather List 00:13:27.745 SGL Command Set: Supported 00:13:27.745 SGL Keyed: Supported 00:13:27.745 SGL Bit Bucket Descriptor: Not Supported 00:13:27.745 SGL Metadata Pointer: Not Supported 00:13:27.745 Oversized SGL: Not Supported 00:13:27.745 SGL Metadata Address: Not Supported 00:13:27.745 SGL Offset: Supported 00:13:27.745 Transport SGL Data Block: Not Supported 00:13:27.745 Replay Protected Memory Block: Not Supported 00:13:27.745 00:13:27.745 Firmware Slot Information 00:13:27.745 ========================= 00:13:27.745 Active slot: 0 00:13:27.745 00:13:27.745 00:13:27.745 Error Log 00:13:27.745 ========= 00:13:27.745 00:13:27.745 Active Namespaces 00:13:27.745 ================= 00:13:27.745 Discovery Log Page 00:13:27.745 ================== 00:13:27.745 Generation Counter: 2 00:13:27.745 Number of Records: 2 00:13:27.745 Record Format: 0 00:13:27.745 00:13:27.745 Discovery Log Entry 0 00:13:27.745 ---------------------- 00:13:27.745 Transport Type: 3 (TCP) 00:13:27.745 Address Family: 1 (IPv4) 00:13:27.745 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:27.745 Entry Flags: 00:13:27.745 Duplicate Returned Information: 1 00:13:27.745 Explicit Persistent Connection Support for Discovery: 1 00:13:27.745 Transport Requirements: 00:13:27.745 Secure Channel: Not Required 00:13:27.745 Port ID: 0 (0x0000) 00:13:27.745 Controller ID: 65535 (0xffff) 00:13:27.745 Admin Max SQ Size: 128 00:13:27.745 Transport Service Identifier: 4420 00:13:27.745 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:27.745 Transport Address: 10.0.0.2 00:13:27.745 
Discovery Log Entry 1 00:13:27.745 ---------------------- 00:13:27.745 Transport Type: 3 (TCP) 00:13:27.745 Address Family: 1 (IPv4) 00:13:27.745 Subsystem Type: 2 (NVM Subsystem) 00:13:27.745 Entry Flags: 00:13:27.745 Duplicate Returned Information: 0 00:13:27.745 Explicit Persistent Connection Support for Discovery: 0 00:13:27.745 Transport Requirements: 00:13:27.745 Secure Channel: Not Required 00:13:27.745 Port ID: 0 (0x0000) 00:13:27.745 Controller ID: 65535 (0xffff) 00:13:27.745 Admin Max SQ Size: 128 00:13:27.745 Transport Service Identifier: 4420 00:13:27.745 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:27.745 Transport Address: 10.0.0.2 [2024-07-15 20:49:49.460118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4f40) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460217] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:27.745 [2024-07-15 20:49:49.460227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4940) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.745 [2024-07-15 20:49:49.460239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4ac0) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.745 [2024-07-15 20:49:49.460248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4c40) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.745 [2024-07-15 20:49:49.460258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:27.745 [2024-07-15 20:49:49.460270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.745 [2024-07-15 20:49:49.460283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.745 [2024-07-15 20:49:49.460298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.745 [2024-07-15 20:49:49.460341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.745 [2024-07-15 20:49:49.460347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.745 [2024-07-15 20:49:49.460350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460364] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.745 [2024-07-15 20:49:49.460374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.745 [2024-07-15 20:49:49.460389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.745 [2024-07-15 20:49:49.460438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.745 [2024-07-15 20:49:49.460444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.745 [2024-07-15 20:49:49.460447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.745 [2024-07-15 20:49:49.460451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.745 [2024-07-15 20:49:49.460456] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:27.746 [2024-07-15 20:49:49.460460] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:27.746 [2024-07-15 20:49:49.460468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.460531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.460537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.460540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.460553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.460614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.460620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.460624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.460636] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.460692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.460698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.460701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.460713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.460773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.460778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.460782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.460794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.460853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.460859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.460862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.460874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460882] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.460933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.460939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.460942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.460954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.460962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.460968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.460980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.461013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.461019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.461023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.461034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.461048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.461060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.461094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.461099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.461103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.461115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.461128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.461140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.461190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.461196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.461200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.461212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.461225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.461238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.461274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.461280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.461284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.461296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.461309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.461321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.746 [2024-07-15 20:49:49.461352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.746 [2024-07-15 20:49:49.461358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.746 [2024-07-15 20:49:49.461362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.746 [2024-07-15 20:49:49.461374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.746 [2024-07-15 20:49:49.461381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.746 [2024-07-15 20:49:49.461387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.746 [2024-07-15 20:49:49.461399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461432] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.461454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.461534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.461612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461680] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.461692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.461772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.461854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.461916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.461919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 
[2024-07-15 20:49:49.461931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.461939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.461945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.461957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.461999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.462004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.462008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.462012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.462020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.462024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.462028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.462034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.462046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.462082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.462088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.462092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.462095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.462104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.462108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.462111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.462117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.462129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.462163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.469187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.469193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.469197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.469208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.469212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:27.747 [2024-07-15 
20:49:49.469216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a32c0) 00:13:27.747 [2024-07-15 20:49:49.469223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:27.747 [2024-07-15 20:49:49.469240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e4dc0, cid 3, qid 0 00:13:27.747 [2024-07-15 20:49:49.469281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:27.747 [2024-07-15 20:49:49.469286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:27.747 [2024-07-15 20:49:49.469290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:27.747 [2024-07-15 20:49:49.469294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e4dc0) on tqpair=0x5a32c0 00:13:27.747 [2024-07-15 20:49:49.469300] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:13:27.747 00:13:27.747 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:27.747 [2024-07-15 20:49:49.513979] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:27.748 [2024-07-15 20:49:49.514045] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74211 ] 00:13:28.012 [2024-07-15 20:49:49.650556] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:28.012 [2024-07-15 20:49:49.650610] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:28.012 [2024-07-15 20:49:49.650615] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:28.012 [2024-07-15 20:49:49.650628] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:28.012 [2024-07-15 20:49:49.650635] sock.c: 347:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:28.012 [2024-07-15 20:49:49.650754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:28.012 [2024-07-15 20:49:49.650791] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19c42c0 0 00:13:28.012 [2024-07-15 20:49:49.658182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:28.012 [2024-07-15 20:49:49.658201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:28.012 [2024-07-15 20:49:49.658206] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:28.012 [2024-07-15 20:49:49.658210] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:28.012 [2024-07-15 20:49:49.658255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.658260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.658265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.658276] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:28.012 [2024-07-15 20:49:49.658298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666220] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:28.012 [2024-07-15 20:49:49.666227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:28.012 [2024-07-15 20:49:49.666234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:28.012 [2024-07-15 20:49:49.666250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.666266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.012 [2024-07-15 20:49:49.666289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666352] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:28.012 [2024-07-15 20:49:49.666359] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:28.012 [2024-07-15 20:49:49.666366] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.666380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.012 [2024-07-15 20:49:49.666394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666437] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:28.012 [2024-07-15 20:49:49.666454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:28.012 [2024-07-15 20:49:49.666460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.666473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.012 [2024-07-15 20:49:49.666486] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:28.012 [2024-07-15 20:49:49.666547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.666561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.012 [2024-07-15 20:49:49.666573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666626] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:28.012 [2024-07-15 20:49:49.666631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:28.012 [2024-07-15 20:49:49.666638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:28.012 [2024-07-15 20:49:49.666743] 
nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:28.012 [2024-07-15 20:49:49.666747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:28.012 [2024-07-15 20:49:49.666755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.666768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.012 [2024-07-15 20:49:49.666781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:28.012 [2024-07-15 20:49:49.666842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.012 [2024-07-15 20:49:49.666855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.012 [2024-07-15 20:49:49.666868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.012 [2024-07-15 20:49:49.666908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.012 [2024-07-15 20:49:49.666913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.012 [2024-07-15 20:49:49.666917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.012 [2024-07-15 20:49:49.666921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.012 [2024-07-15 20:49:49.666925] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:28.012 [2024-07-15 20:49:49.666930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:28.012 [2024-07-15 20:49:49.666937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:28.012 [2024-07-15 20:49:49.666946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:28.012 [2024-07-15 20:49:49.666955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.012 
[2024-07-15 20:49:49.666958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.666964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.013 [2024-07-15 20:49:49.666978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.013 [2024-07-15 20:49:49.667041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.013 [2024-07-15 20:49:49.667047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.013 [2024-07-15 20:49:49.667050] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667054] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=4096, cccid=0 00:13:28.013 [2024-07-15 20:49:49.667059] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a05940) on tqpair(0x19c42c0): expected_datao=0, payload_size=4096 00:13:28.013 [2024-07-15 20:49:49.667064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667071] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667075] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.013 [2024-07-15 20:49:49.667088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.013 [2024-07-15 20:49:49.667092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.013 [2024-07-15 20:49:49.667103] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:28.013 [2024-07-15 20:49:49.667107] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:28.013 [2024-07-15 20:49:49.667112] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:28.013 [2024-07-15 20:49:49.667116] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:28.013 [2024-07-15 20:49:49.667121] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:28.013 [2024-07-15 20:49:49.667125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:28.013 [2024-07-15 
20:49:49.667176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.013 [2024-07-15 20:49:49.667217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.013 [2024-07-15 20:49:49.667222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.013 [2024-07-15 20:49:49.667226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.013 [2024-07-15 20:49:49.667237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.013 [2024-07-15 20:49:49.667256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.013 [2024-07-15 20:49:49.667274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.013 [2024-07-15 20:49:49.667292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.013 [2024-07-15 20:49:49.667310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.013 [2024-07-15 20:49:49.667352] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05940, cid 0, qid 0 00:13:28.013 [2024-07-15 20:49:49.667357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05ac0, cid 1, qid 0 00:13:28.013 [2024-07-15 20:49:49.667361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05c40, cid 2, qid 0 00:13:28.013 [2024-07-15 20:49:49.667366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.013 [2024-07-15 20:49:49.667370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.013 [2024-07-15 20:49:49.667438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.013 [2024-07-15 20:49:49.667444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.013 [2024-07-15 20:49:49.667447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.013 [2024-07-15 20:49:49.667456] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:28.013 [2024-07-15 20:49:49.667464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:28.013 [2024-07-15 20:49:49.667510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.013 [2024-07-15 20:49:49.667549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.013 [2024-07-15 20:49:49.667555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.013 [2024-07-15 20:49:49.667558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.013 [2024-07-15 20:49:49.667612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c42c0) 00:13:28.013 
[2024-07-15 20:49:49.667638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.013 [2024-07-15 20:49:49.667651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.013 [2024-07-15 20:49:49.667694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.013 [2024-07-15 20:49:49.667700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.013 [2024-07-15 20:49:49.667704] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667707] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=4096, cccid=4 00:13:28.013 [2024-07-15 20:49:49.667712] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a05f40) on tqpair(0x19c42c0): expected_datao=0, payload_size=4096 00:13:28.013 [2024-07-15 20:49:49.667717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667723] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667727] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.013 [2024-07-15 20:49:49.667740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.013 [2024-07-15 20:49:49.667743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.013 [2024-07-15 20:49:49.667758] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:28.013 [2024-07-15 20:49:49.667767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:28.013 [2024-07-15 20:49:49.667782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c42c0) 00:13:28.013 [2024-07-15 20:49:49.667792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.013 [2024-07-15 20:49:49.667805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.013 [2024-07-15 20:49:49.667859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.013 [2024-07-15 20:49:49.667864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.013 [2024-07-15 20:49:49.667868] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667871] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=4096, cccid=4 00:13:28.013 [2024-07-15 20:49:49.667876] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a05f40) on tqpair(0x19c42c0): expected_datao=0, payload_size=4096 00:13:28.013 [2024-07-15 20:49:49.667881] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667886] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667890] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.013 [2024-07-15 20:49:49.667897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.013 [2024-07-15 20:49:49.667903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.667906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.667910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.667922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.667930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.667937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.667941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.667946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.667959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.014 [2024-07-15 20:49:49.668000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.014 [2024-07-15 20:49:49.668006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.014 [2024-07-15 20:49:49.668009] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668013] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=4096, cccid=4 00:13:28.014 [2024-07-15 20:49:49.668018] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a05f40) on tqpair(0x19c42c0): expected_datao=0, payload_size=4096 00:13:28.014 [2024-07-15 20:49:49.668022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668028] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.014 [2024-07-15 20:49:49.668044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.668048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.668058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668074] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668096] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:28.014 [2024-07-15 20:49:49.668101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:28.014 [2024-07-15 20:49:49.668106] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:28.014 [2024-07-15 20:49:49.668120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.014 [2024-07-15 20:49:49.668175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.014 [2024-07-15 20:49:49.668181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a060c0, cid 5, qid 0 00:13:28.014 [2024-07-15 20:49:49.668227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.014 [2024-07-15 20:49:49.668232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.668236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.668246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.014 [2024-07-15 20:49:49.668252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.668255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a060c0) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.668268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a060c0, cid 5, qid 0 00:13:28.014 [2024-07-15 20:49:49.668328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.014 [2024-07-15 20:49:49.668333] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.668337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a060c0) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.668349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a060c0, cid 5, qid 0 00:13:28.014 [2024-07-15 20:49:49.668412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.014 [2024-07-15 20:49:49.668418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.668422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a060c0) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.668434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a060c0, cid 5, qid 0 00:13:28.014 [2024-07-15 20:49:49.668489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.014 [2024-07-15 20:49:49.668495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.014 [2024-07-15 20:49:49.668498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a060c0) on tqpair=0x19c42c0 00:13:28.014 [2024-07-15 20:49:49.668516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19c42c0) 00:13:28.014 [2024-07-15 20:49:49.668578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.014 [2024-07-15 20:49:49.668591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a060c0, cid 5, qid 0 00:13:28.014 [2024-07-15 20:49:49.668597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05f40, cid 4, qid 0 00:13:28.014 [2024-07-15 20:49:49.668601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a06240, cid 6, qid 0 00:13:28.014 [2024-07-15 20:49:49.668605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a063c0, cid 7, qid 0 00:13:28.014 [2024-07-15 20:49:49.668708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.014 [2024-07-15 20:49:49.668713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.014 [2024-07-15 20:49:49.668717] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668721] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=8192, cccid=5 00:13:28.014 [2024-07-15 20:49:49.668725] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a060c0) on tqpair(0x19c42c0): expected_datao=0, payload_size=8192 00:13:28.014 [2024-07-15 20:49:49.668730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668745] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668749] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.014 [2024-07-15 20:49:49.668760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.014 [2024-07-15 20:49:49.668763] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=512, cccid=4 00:13:28.014 [2024-07-15 20:49:49.668772] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a05f40) on tqpair(0x19c42c0): expected_datao=0, payload_size=512 00:13:28.014 [2024-07-15 20:49:49.668776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668782] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668786] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.014 [2024-07-15 20:49:49.668796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.014 [2024-07-15 20:49:49.668800] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.014 [2024-07-15 20:49:49.668804] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=512, cccid=6 00:13:28.014 [2024-07-15 20:49:49.668808] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a06240) on tqpair(0x19c42c0): expected_datao=0, payload_size=512 00:13:28.015 [2024-07-15 20:49:49.668813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668818] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668822] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:28.015 [2024-07-15 20:49:49.668833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:28.015 [2024-07-15 20:49:49.668836] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c42c0): datao=0, datal=4096, cccid=7 00:13:28.015 [2024-07-15 20:49:49.668844] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a063c0) on tqpair(0x19c42c0): expected_datao=0, payload_size=4096 00:13:28.015 [2024-07-15 20:49:49.668849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668855] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668859] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.015 [2024-07-15 20:49:49.668871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.015 [2024-07-15 20:49:49.668874] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a060c0) on tqpair=0x19c42c0 00:13:28.015 [2024-07-15 20:49:49.668893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.015 [2024-07-15 20:49:49.668898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.015 [2024-07-15 20:49:49.668902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05f40) on tqpair=0x19c42c0 00:13:28.015 [2024-07-15 20:49:49.668917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.015 [2024-07-15 20:49:49.668923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.015 [2024-07-15 20:49:49.668926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.015 [2024-07-15 20:49:49.668930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a06240) on tqpair=0x19c42c0 00:13:28.015 [2024-07-15 20:49:49.668937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.015 [2024-07-15 20:49:49.668942] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:13:28.015 [2024-07-15 20:49:49.668946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:13:28.015 =====================================================
00:13:28.015 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:28.015 =====================================================
00:13:28.015 Controller Capabilities/Features
00:13:28.015 ================================
00:13:28.015 Vendor ID: 8086
00:13:28.015 Subsystem Vendor ID: 8086
00:13:28.015 Serial Number: SPDK00000000000001
00:13:28.015 Model Number: SPDK bdev Controller
00:13:28.015 Firmware Version: 24.09
00:13:28.015 Recommended Arb Burst: 6
00:13:28.015 IEEE OUI Identifier: e4 d2 5c
00:13:28.015 Multi-path I/O
00:13:28.015 May have multiple subsystem ports: Yes
00:13:28.015 May have multiple controllers: Yes
00:13:28.015 Associated with SR-IOV VF: No
00:13:28.015 Max Data Transfer Size: 131072
00:13:28.015 Max Number of Namespaces: 32
00:13:28.015 Max Number of I/O Queues: 127
00:13:28.015 NVMe Specification Version (VS): 1.3
00:13:28.015 NVMe Specification Version (Identify): 1.3
00:13:28.015 Maximum Queue Entries: 128
00:13:28.015 Contiguous Queues Required: Yes
00:13:28.015 Arbitration Mechanisms Supported
00:13:28.015 Weighted Round Robin: Not Supported
00:13:28.015 Vendor Specific: Not Supported
00:13:28.015 Reset Timeout: 15000 ms
00:13:28.015 Doorbell Stride: 4 bytes
00:13:28.015 NVM Subsystem Reset: Not Supported
00:13:28.015 Command Sets Supported
00:13:28.015 NVM Command Set: Supported
00:13:28.015 Boot Partition: Not Supported
00:13:28.015 Memory Page Size Minimum: 4096 bytes
00:13:28.015 Memory Page Size Maximum: 4096 bytes
00:13:28.015 Persistent Memory Region: Not Supported
00:13:28.015 Optional Asynchronous Events Supported
00:13:28.015 Namespace Attribute Notices: Supported
00:13:28.015 Firmware Activation Notices: Not Supported
00:13:28.015 ANA Change Notices: Not Supported
00:13:28.015 PLE Aggregate Log Change Notices: Not Supported
00:13:28.015 LBA Status Info Alert Notices: Not Supported
00:13:28.015 EGE Aggregate Log Change Notices: Not Supported
00:13:28.015 Normal NVM Subsystem Shutdown event: Not Supported
00:13:28.015 Zone Descriptor Change Notices: Not Supported
00:13:28.015 Discovery Log Change Notices: Not Supported
00:13:28.015 Controller Attributes
00:13:28.015 128-bit Host Identifier: Supported
00:13:28.015 Non-Operational Permissive Mode: Not Supported
00:13:28.015 NVM Sets: Not Supported
00:13:28.015 Read Recovery Levels: Not Supported
00:13:28.015 Endurance Groups: Not Supported
00:13:28.015 Predictable Latency Mode: Not Supported
00:13:28.015 Traffic Based Keep ALive: Not Supported
00:13:28.015 Namespace Granularity: Not Supported
00:13:28.015 SQ Associations: Not Supported
00:13:28.015 UUID List: Not Supported
00:13:28.015 Multi-Domain Subsystem: Not Supported
00:13:28.015 Fixed Capacity Management: Not Supported
00:13:28.015 Variable Capacity Management: Not Supported
00:13:28.015 Delete Endurance Group: Not Supported
00:13:28.015 Delete NVM Set: Not Supported
00:13:28.015 Extended LBA Formats Supported: Not Supported
00:13:28.015 Flexible Data Placement Supported: Not Supported
00:13:28.015
00:13:28.015 Controller Memory Buffer Support
00:13:28.015 ================================
00:13:28.015 Supported: No
00:13:28.015
00:13:28.015 Persistent Memory Region Support
00:13:28.015 ================================
00:13:28.015 Supported: No
00:13:28.015
00:13:28.015 Admin Command Set Attributes
00:13:28.015 ============================
00:13:28.015 Security Send/Receive: Not Supported
00:13:28.015 Format NVM: Not Supported
00:13:28.015 Firmware Activate/Download: Not Supported
00:13:28.015 Namespace Management: Not Supported
00:13:28.015 Device Self-Test: Not Supported
00:13:28.015 Directives: Not Supported
00:13:28.015 NVMe-MI: Not Supported
00:13:28.015 Virtualization Management: Not Supported
00:13:28.015 Doorbell Buffer Config: Not Supported
00:13:28.015 Get LBA Status Capability: Not Supported
00:13:28.015 Command & Feature Lockdown Capability: Not Supported
00:13:28.015 Abort Command Limit: 4
00:13:28.015 Async Event Request Limit: 4
00:13:28.015 Number of Firmware Slots: N/A
00:13:28.015 Firmware Slot 1 Read-Only: N/A
00:13:28.015 Firmware Activation Without Reset: N/A
00:13:28.015 Multiple Update Detection Support: N/A
00:13:28.015 Firmware Update Granularity: No Information Provided
00:13:28.015 Per-Namespace SMART Log: No
00:13:28.015 Asymmetric Namespace Access Log Page: Not Supported
00:13:28.015 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:13:28.015 Command Effects Log Page: Supported
00:13:28.015 Get Log Page Extended Data: Supported
00:13:28.015 Telemetry Log Pages: Not Supported
00:13:28.015 Persistent Event Log Pages: Not Supported
00:13:28.015 Supported Log Pages Log Page: May Support
00:13:28.015 Commands Supported & Effects Log Page: Not Supported
00:13:28.015 Feature Identifiers & Effects Log Page:May Support
00:13:28.015 NVMe-MI Commands & Effects Log Page: May Support
00:13:28.015 Data Area 4 for Telemetry Log: Not Supported
00:13:28.015 Error Log Page Entries Supported: 128
00:13:28.015 Keep Alive: Supported
00:13:28.015 Keep Alive Granularity: 10000 ms
00:13:28.015
00:13:28.015 NVM Command Set Attributes
00:13:28.015 ==========================
00:13:28.015 Submission Queue Entry Size
00:13:28.015 Max: 64
00:13:28.015 Min: 64
00:13:28.015 Completion Queue Entry Size
00:13:28.015 Max: 16
00:13:28.015 Min: 16
00:13:28.015 Number of Namespaces: 32
00:13:28.015 Compare Command: Supported
00:13:28.015 Write Uncorrectable Command: Not Supported
00:13:28.015 Dataset Management Command: Supported
00:13:28.015 Write Zeroes Command: Supported
00:13:28.015 Set Features Save Field: Not Supported
00:13:28.015 Reservations: Supported
00:13:28.015 Timestamp: Not Supported
00:13:28.015 Copy: Supported
00:13:28.015 Volatile Write Cache: Present
00:13:28.015 Atomic Write Unit (Normal): 1
00:13:28.015 Atomic Write Unit (PFail): 1
00:13:28.015 Atomic Compare & Write Unit: 1
00:13:28.015 Fused Compare & Write: Supported
00:13:28.015 Scatter-Gather List
00:13:28.015 SGL Command Set: Supported
00:13:28.015 SGL Keyed: Supported
00:13:28.015 SGL Bit Bucket Descriptor: Not Supported
00:13:28.015 SGL Metadata Pointer: Not Supported
00:13:28.015 Oversized SGL: Not Supported
00:13:28.015 SGL Metadata Address: Not Supported
00:13:28.015 SGL Offset: Supported
00:13:28.015 Transport SGL Data Block: Not Supported
00:13:28.015 Replay Protected Memory Block: Not Supported
00:13:28.015
00:13:28.015 Firmware Slot Information
00:13:28.015 =========================
00:13:28.015 Active slot: 1
00:13:28.015 Slot 1 Firmware Revision: 24.09
00:13:28.015
00:13:28.015
00:13:28.015 Commands Supported and Effects
00:13:28.015 ==============================
00:13:28.015 Admin Commands
00:13:28.015 --------------
00:13:28.015 Get Log Page (02h): Supported
00:13:28.015 Identify (06h): Supported
00:13:28.015 Abort (08h): Supported
00:13:28.015 Set Features (09h): Supported
00:13:28.015 Get Features (0Ah): Supported
00:13:28.015 Asynchronous Event Request (0Ch): Supported
00:13:28.015 Keep Alive (18h): Supported
00:13:28.015 I/O Commands
00:13:28.015 ------------
00:13:28.015 Flush (00h): Supported LBA-Change
00:13:28.015 Write (01h): Supported LBA-Change
00:13:28.015 Read (02h): Supported
00:13:28.015 Compare (05h): Supported
00:13:28.015 Write Zeroes (08h): Supported LBA-Change
00:13:28.015 Dataset Management (09h): Supported LBA-Change
00:13:28.015 Copy (19h): Supported LBA-Change
00:13:28.016 Error Log
00:13:28.016 =========
00:13:28.016
00:13:28.016 Arbitration
00:13:28.016 ===========
00:13:28.016 Arbitration Burst: 1
00:13:28.016
00:13:28.016 Power Management
00:13:28.016 ================
00:13:28.016 Number of Power States: 1
00:13:28.016 Current Power State: Power State #0
00:13:28.016 Power State #0:
00:13:28.016 Max Power: 0.00 W
00:13:28.016 Non-Operational State: Operational
00:13:28.016 Entry Latency: Not Reported
00:13:28.016 Exit Latency: Not Reported
00:13:28.016 Relative Read Throughput: 0
00:13:28.016 Relative Read Latency: 0
00:13:28.016 Relative Write Throughput: 0
00:13:28.016 Relative Write Latency: 0
00:13:28.016 Idle Power: Not Reported
00:13:28.016 Active Power: Not Reported
00:13:28.016 Non-Operational Permissive Mode: Not Supported
00:13:28.016
00:13:28.016 Health Information
00:13:28.016 ==================
00:13:28.016 Critical Warnings:
00:13:28.016 Available Spare Space: OK
00:13:28.016 Temperature: OK
00:13:28.016 Device Reliability: OK
00:13:28.016 Read Only: No
00:13:28.016 Volatile Memory Backup: OK
00:13:28.016 Current Temperature: 0 Kelvin (-273 Celsius)
00:13:28.016 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:13:28.016 Available Spare: 0%
00:13:28.016 Available Spare Threshold: 0%
00:13:28.016 Life Percentage Used:[2024-07-15 20:49:49.668949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a063c0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a063c0, cid 7, qid 0 00:13:28.016 [2024-07-15 20:49:49.669107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.016 [2024-07-15 20:49:49.669116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a063c0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669150] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:28.016 [2024-07-15 20:49:49.669159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05940) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.016 [2024-07-15 20:49:49.669180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05ac0) on
tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.016 [2024-07-15 20:49:49.669190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05c40) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.016 [2024-07-15 20:49:49.669200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.016 [2024-07-15 20:49:49.669211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.016 [2024-07-15 20:49:49.669277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.016 [2024-07-15 20:49:49.669287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.016 [2024-07-15 20:49:49.669372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.016 [2024-07-15 20:49:49.669381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669389] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:28.016 [2024-07-15 20:49:49.669394] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:28.016 [2024-07-15 20:49:49.669402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:28.016 [2024-07-15 20:49:49.669410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.016 [2024-07-15 20:49:49.669462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.016 [2024-07-15 20:49:49.669472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.016 [2024-07-15 20:49:49.669544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.016 [2024-07-15 20:49:49.669554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.016 [2024-07-15 20:49:49.669630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.016 [2024-07-15 20:49:49.669639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.016 [2024-07-15 20:49:49.669651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.016 [2024-07-15 20:49:49.669659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.016 [2024-07-15 20:49:49.669665] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.016 [2024-07-15 20:49:49.669677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.016 [2024-07-15 20:49:49.669711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.016 [2024-07-15 20:49:49.669717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.669720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.669732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.017 [2024-07-15 20:49:49.669746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.017 [2024-07-15 20:49:49.669759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.017 [2024-07-15 20:49:49.669800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.017 [2024-07-15 20:49:49.669806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.669809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.669822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.017 [2024-07-15 20:49:49.669835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.017 [2024-07-15 20:49:49.669847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.017 [2024-07-15 20:49:49.669881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.017 [2024-07-15 20:49:49.669887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.669890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.669902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.017 [2024-07-15 20:49:49.669916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.017 [2024-07-15 20:49:49.669928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.017 [2024-07-15 20:49:49.669967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.017 [2024-07-15 20:49:49.669973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.669977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.669980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.669997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.670002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.670006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.017 [2024-07-15 20:49:49.670012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.017 [2024-07-15 20:49:49.670025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.017 [2024-07-15 20:49:49.670061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.017 [2024-07-15 20:49:49.670067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.670070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.670074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.670083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.670086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.670090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.017 [2024-07-15 20:49:49.670096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.017 [2024-07-15 20:49:49.670109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.017 [2024-07-15 20:49:49.670151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.017 [2024-07-15 20:49:49.670156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.670160] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.670164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.674191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.674197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.674200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c42c0) 00:13:28.017 [2024-07-15 20:49:49.674207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:28.017 [2024-07-15 20:49:49.674224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a05dc0, cid 3, qid 0 00:13:28.017 [2024-07-15 20:49:49.674263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:28.017 [2024-07-15 20:49:49.674269] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:28.017 [2024-07-15 20:49:49.674273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:28.017 [2024-07-15 20:49:49.674277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a05dc0) on tqpair=0x19c42c0 00:13:28.017 [2024-07-15 20:49:49.674284] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:13:28.017 0% 00:13:28.017 Data Units Read: 0 00:13:28.017 Data Units Written: 0 00:13:28.017 Host Read Commands: 0 00:13:28.017 Host Write Commands: 0 00:13:28.017 Controller Busy Time: 0 minutes 00:13:28.017 Power Cycles: 0 00:13:28.017 Power On Hours: 0 hours 00:13:28.017 Unsafe Shutdowns: 0 00:13:28.017 Unrecoverable Media Errors: 0 00:13:28.017 Lifetime Error Log Entries: 0 00:13:28.017 Warning Temperature Time: 0 minutes 00:13:28.017 Critical Temperature Time: 0 minutes 00:13:28.017 00:13:28.017 Number of Queues 00:13:28.017 ================ 00:13:28.017 Number of I/O Submission Queues: 127 00:13:28.017 Number of I/O Completion Queues: 127 00:13:28.017 00:13:28.017 Active Namespaces 00:13:28.017 ================= 00:13:28.017 Namespace ID:1 00:13:28.017 Error Recovery Timeout: Unlimited 00:13:28.017 Command Set Identifier: NVM (00h) 00:13:28.017 Deallocate: Supported 00:13:28.017 Deallocated/Unwritten Error: Not Supported 00:13:28.017 Deallocated Read Value: Unknown 00:13:28.017 Deallocate in Write Zeroes: Not Supported 00:13:28.017 Deallocated Guard Field: 0xFFFF 00:13:28.017 Flush: Supported 00:13:28.017 Reservation: Supported 00:13:28.017 Namespace Sharing Capabilities: Multiple Controllers 00:13:28.017 Size (in LBAs): 131072 (0GiB) 00:13:28.017 Capacity (in LBAs): 131072 (0GiB) 00:13:28.017 Utilization (in LBAs): 131072 (0GiB) 00:13:28.017 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:28.017 EUI64: ABCDEF0123456789 00:13:28.017 UUID: af41ed73-d137-40b2-8e08-59cd8c02e821 00:13:28.017 Thin Provisioning: Not Supported 00:13:28.017 Per-NS Atomic Units: Yes 00:13:28.017 Atomic Boundary Size (Normal): 0 00:13:28.017 Atomic Boundary Size (PFail): 0 00:13:28.017 Atomic Boundary Offset: 0 00:13:28.017 Maximum Single Source Range Length: 65535 00:13:28.017 Maximum Copy Length: 65535 00:13:28.017 Maximum Source Range Count: 1 00:13:28.017 NGUID/EUI64 Never Reused: No 00:13:28.017 Namespace Write Protected: No 00:13:28.017 Number of LBA Formats: 1 00:13:28.017 Current LBA Format: LBA Format #00 00:13:28.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:28.017 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.017 20:49:49 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.017 rmmod nvme_tcp 00:13:28.017 rmmod nvme_fabrics 00:13:28.017 rmmod nvme_keyring 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74173 ']' 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74173 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74173 ']' 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74173 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74173 00:13:28.017 killing process with pid 74173 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74173' 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74173 00:13:28.017 20:49:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74173 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:28.277 00:13:28.277 real 0m2.498s 00:13:28.277 user 0m6.367s 00:13:28.277 sys 0m0.756s 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.277 20:49:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:28.277 ************************************ 00:13:28.277 END TEST nvmf_identify 00:13:28.277 ************************************ 00:13:28.277 20:49:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.277 20:49:50 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:28.277 20:49:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.277 20:49:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:13:28.277 20:49:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.537 ************************************ 00:13:28.537 START TEST nvmf_perf 00:13:28.537 ************************************ 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:28.537 * Looking for test storage... 00:13:28.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.537 20:49:50 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:28.537 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:28.538 Cannot find device "nvmf_tgt_br" 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.538 Cannot find device "nvmf_tgt_br2" 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:28.538 Cannot find device "nvmf_tgt_br" 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:13:28.538 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:28.797 Cannot find device "nvmf_tgt_br2" 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.797 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:29.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:13:29.057 00:13:29.057 --- 10.0.0.2 ping statistics --- 00:13:29.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.057 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:29.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:29.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:13:29.057 00:13:29.057 --- 10.0.0.3 ping statistics --- 00:13:29.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.057 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:13:29.057 00:13:29.057 --- 10.0.0.1 ping statistics --- 00:13:29.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.057 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74377 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74377 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74377 ']' 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.057 20:49:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:29.057 [2024-07-15 20:49:50.844529] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
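For reference, the nvmf_veth_init bring-up captured above is what gives this run its virtual test network: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace while the initiator stays in the root namespace, joined through a bridge. A condensed sketch of the same topology (iproute2, run as root; the interface and namespace names are the ones this harness uses, and the second target leg nvmf_tgt_if2/10.0.0.3 follows the identical pattern):

  # Target lives in its own namespace; initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one leg stays in the root namespace (to be bridged), the other moves into the target netns.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Addressing: initiator 10.0.0.1/24, target 10.0.0.2/24 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the root-namespace legs so initiator and target can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Open the NVMe/TCP port, allow bridge forwarding, then sanity-check connectivity.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2

The NVMF_TARGET_NS_CMD prefix used below (ip netns exec nvmf_tgt_ns_spdk ...) is what launches nvmf_tgt inside that namespace, while spdk_nvme_perf runs as the initiator from the root namespace.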
00:13:29.057 [2024-07-15 20:49:50.844594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.316 [2024-07-15 20:49:50.985747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.316 [2024-07-15 20:49:51.071287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.316 [2024-07-15 20:49:51.071340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.316 [2024-07-15 20:49:51.071350] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.316 [2024-07-15 20:49:51.071359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.316 [2024-07-15 20:49:51.071365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.316 [2024-07-15 20:49:51.071569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.316 [2024-07-15 20:49:51.071753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.316 [2024-07-15 20:49:51.072503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.316 [2024-07-15 20:49:51.072504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.316 [2024-07-15 20:49:51.113171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:29.885 20:49:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:30.454 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:30.454 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:30.454 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:30.454 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:30.712 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:30.712 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:13:30.712 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:30.712 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:30.712 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:30.970 [2024-07-15 20:49:52.637319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
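For reference, the target-side plumbing in this stretch of the run reduces to a short rpc.py sequence: the local NVMe controller is registered via gen_nvme.sh and load_subsystem_config (shown piped here, which is how the xtrace at perf.sh@28 reads), a 64 MiB malloc bdev with 512-byte blocks is created, and then the TCP transport, subsystem, namespaces and listeners are wired up as the next lines show. A sketch under those assumptions ($rpc is just shorthand for the rpc.py path from this run; 0000:00:10.0 is this VM's NVMe device):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the local NVMe controller (Nvme0 at 0000:00:10.0) from gen_nvme.sh output.
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
  # 64 MiB malloc bdev with 512-byte blocks -> Malloc0
  $rpc bdev_malloc_create 64 512
  # TCP transport, subsystem, namespaces, and listeners.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The perf runs that follow then point spdk_nvme_perf at that listener from the initiator side, e.g.:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'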
00:13:30.970 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:30.970 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:30.970 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:31.228 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:31.228 20:49:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:31.485 20:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.485 [2024-07-15 20:49:53.349469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.485 20:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.744 20:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:31.744 20:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:31.744 20:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:31.744 20:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:33.121 Initializing NVMe Controllers 00:13:33.121 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:33.121 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:33.121 Initialization complete. Launching workers. 00:13:33.121 ======================================================== 00:13:33.121 Latency(us) 00:13:33.121 Device Information : IOPS MiB/s Average min max 00:13:33.121 PCIE (0000:00:10.0) NSID 1 from core 0: 18817.00 73.50 1700.35 635.07 5959.40 00:13:33.121 ======================================================== 00:13:33.121 Total : 18817.00 73.50 1700.35 635.07 5959.40 00:13:33.121 00:13:33.121 20:49:54 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:34.148 Initializing NVMe Controllers 00:13:34.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:34.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:34.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:34.148 Initialization complete. Launching workers. 
00:13:34.148 ======================================================== 00:13:34.148 Latency(us) 00:13:34.148 Device Information : IOPS MiB/s Average min max 00:13:34.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5070.98 19.81 196.98 77.49 4182.51 00:13:34.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8125.56 3901.18 12094.51 00:13:34.148 ======================================================== 00:13:34.148 Total : 5194.98 20.29 386.23 77.49 12094.51 00:13:34.148 00:13:34.148 20:49:56 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:35.523 Initializing NVMe Controllers 00:13:35.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:35.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:35.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:35.523 Initialization complete. Launching workers. 00:13:35.523 ======================================================== 00:13:35.523 Latency(us) 00:13:35.523 Device Information : IOPS MiB/s Average min max 00:13:35.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11395.32 44.51 2808.59 478.14 6533.02 00:13:35.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4009.02 15.66 8012.49 6078.94 9862.10 00:13:35.523 ======================================================== 00:13:35.523 Total : 15404.35 60.17 4162.92 478.14 9862.10 00:13:35.523 00:13:35.523 20:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:35.523 20:49:57 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:38.053 Initializing NVMe Controllers 00:13:38.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.053 Controller IO queue size 128, less than required. 00:13:38.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.053 Controller IO queue size 128, less than required. 00:13:38.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:38.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:38.053 Initialization complete. Launching workers. 
00:13:38.053 ======================================================== 00:13:38.053 Latency(us) 00:13:38.053 Device Information : IOPS MiB/s Average min max 00:13:38.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2325.45 581.36 55854.10 26495.58 85667.87 00:13:38.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 663.84 165.96 198346.19 50066.92 320847.77 00:13:38.053 ======================================================== 00:13:38.053 Total : 2989.29 747.32 87497.86 26495.58 320847.77 00:13:38.053 00:13:38.053 20:49:59 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:38.311 Initializing NVMe Controllers 00:13:38.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.311 Controller IO queue size 128, less than required. 00:13:38.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.312 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:38.312 Controller IO queue size 128, less than required. 00:13:38.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.312 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:38.312 WARNING: Some requested NVMe devices were skipped 00:13:38.312 No valid NVMe controllers or AIO or URING devices found 00:13:38.312 20:50:00 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:40.839 Initializing NVMe Controllers 00:13:40.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.839 Controller IO queue size 128, less than required. 00:13:40.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:40.839 Controller IO queue size 128, less than required. 00:13:40.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:40.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:40.839 Initialization complete. Launching workers. 
00:13:40.839 00:13:40.839 ==================== 00:13:40.839 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:40.839 TCP transport: 00:13:40.839 polls: 13015 00:13:40.839 idle_polls: 8227 00:13:40.839 sock_completions: 4788 00:13:40.839 nvme_completions: 8337 00:13:40.839 submitted_requests: 12482 00:13:40.839 queued_requests: 1 00:13:40.839 00:13:40.839 ==================== 00:13:40.839 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:40.839 TCP transport: 00:13:40.839 polls: 13153 00:13:40.839 idle_polls: 7946 00:13:40.839 sock_completions: 5207 00:13:40.839 nvme_completions: 8537 00:13:40.839 submitted_requests: 12920 00:13:40.839 queued_requests: 1 00:13:40.839 ======================================================== 00:13:40.839 Latency(us) 00:13:40.839 Device Information : IOPS MiB/s Average min max 00:13:40.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2080.04 520.01 62322.87 26057.94 100912.45 00:13:40.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2129.95 532.49 60674.09 32477.75 96891.42 00:13:40.839 ======================================================== 00:13:40.839 Total : 4209.99 1052.50 61488.70 26057.94 100912.45 00:13:40.839 00:13:40.839 20:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:13:40.839 20:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:41.096 rmmod nvme_tcp 00:13:41.096 rmmod nvme_fabrics 00:13:41.096 rmmod nvme_keyring 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74377 ']' 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74377 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74377 ']' 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74377 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:41.096 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74377 00:13:41.096 killing process with pid 74377 00:13:41.097 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:41.097 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:41.097 20:50:02 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74377' 00:13:41.097 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74377 00:13:41.097 20:50:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74377 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.661 20:50:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.919 20:50:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:41.919 ************************************ 00:13:41.919 END TEST nvmf_perf 00:13:41.919 ************************************ 00:13:41.919 00:13:41.919 real 0m13.459s 00:13:41.919 user 0m47.609s 00:13:41.919 sys 0m4.274s 00:13:41.919 20:50:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.919 20:50:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:41.919 20:50:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.919 20:50:03 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:41.919 20:50:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.919 20:50:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.919 20:50:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.919 ************************************ 00:13:41.919 START TEST nvmf_fio_host 00:13:41.919 ************************************ 00:13:41.919 20:50:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:42.179 * Looking for test storage... 
00:13:42.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:42.179 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:42.180 Cannot find device "nvmf_tgt_br" 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.180 Cannot find device "nvmf_tgt_br2" 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:42.180 Cannot find device "nvmf_tgt_br" 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:42.180 Cannot find device "nvmf_tgt_br2" 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:13:42.180 20:50:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.180 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:42.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:13:42.440 00:13:42.440 --- 10.0.0.2 ping statistics --- 00:13:42.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.440 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:42.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:42.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:42.440 00:13:42.440 --- 10.0.0.3 ping statistics --- 00:13:42.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.440 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:42.440 00:13:42.440 --- 10.0.0.1 ping statistics --- 00:13:42.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.440 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:42.440 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:42.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74775 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74775 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 74775 ']' 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.441 20:50:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:42.711 [2024-07-15 20:50:04.387464] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:42.711 [2024-07-15 20:50:04.387532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.711 [2024-07-15 20:50:04.529839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.711 [2024-07-15 20:50:04.608304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
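The trace above is nvmf_veth_init building the virtual test topology used when NET_TYPE=virt: a veth pair for the initiator (nvmf_init_if at 10.0.0.1/24, bridged via nvmf_init_br), veth pairs for the target moved into the nvmf_tgt_ns_spdk network namespace (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24), everything joined through the nvmf_br bridge, plus an iptables rule admitting TCP port 4420. A minimal standalone sketch of that setup, condensed from the commands in the trace (the second target interface is handled the same way and is omitted here for brevity):

  # Build the initiator<->target veth topology used by the virt tests (condensed sketch).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator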
00:13:42.711 [2024-07-15 20:50:04.608357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.711 [2024-07-15 20:50:04.608368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.711 [2024-07-15 20:50:04.608376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.711 [2024-07-15 20:50:04.608382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.711 [2024-07-15 20:50:04.608577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.711 [2024-07-15 20:50:04.608828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.711 [2024-07-15 20:50:04.609668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.711 [2024-07-15 20:50:04.609670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.972 [2024-07-15 20:50:04.651140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:43.540 20:50:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.540 20:50:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:13:43.540 20:50:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:43.540 [2024-07-15 20:50:05.385576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.540 20:50:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:43.540 20:50:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.540 20:50:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:43.798 20:50:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:43.798 Malloc1 00:13:43.798 20:50:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:44.056 20:50:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.315 20:50:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.315 [2024-07-15 20:50:06.183441] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.315 20:50:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:44.575 20:50:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:44.834 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:44.834 fio-3.35 00:13:44.834 Starting 1 thread 00:13:47.368 00:13:47.368 test: (groupid=0, jobs=1): err= 0: pid=74853: Mon Jul 15 20:50:08 2024 00:13:47.368 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(91.9MiB/2006msec) 00:13:47.368 slat (nsec): min=1549, max=516454, avg=1710.15, stdev=3911.61 00:13:47.368 clat (usec): min=3134, max=10477, avg=5696.96, stdev=392.36 00:13:47.368 lat (usec): min=3201, max=10479, avg=5698.67, stdev=392.34 00:13:47.368 clat percentiles (usec): 00:13:47.368 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5407], 00:13:47.368 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:13:47.368 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6128], 95.00th=[ 6325], 00:13:47.368 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 8160], 99.95th=[ 9503], 00:13:47.368 | 99.99th=[10421] 00:13:47.368 bw ( KiB/s): min=45896, max=47624, per=100.00%, avg=46934.00, stdev=775.42, samples=4 00:13:47.368 iops : min=11474, max=11906, avg=11733.50, stdev=193.85, samples=4 00:13:47.368 write: IOPS=11.7k, BW=45.5MiB/s (47.7MB/s)(91.3MiB/2006msec); 0 zone resets 00:13:47.368 
slat (nsec): min=1585, max=269185, avg=1746.61, stdev=1944.09 00:13:47.368 clat (usec): min=2965, max=10339, avg=5180.81, stdev=360.71 00:13:47.368 lat (usec): min=2981, max=10340, avg=5182.55, stdev=360.81 00:13:47.368 clat percentiles (usec): 00:13:47.368 | 1.00th=[ 4293], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 4948], 00:13:47.368 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:13:47.368 | 70.00th=[ 5342], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:13:47.368 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 8586], 99.95th=[ 9372], 00:13:47.368 | 99.99th=[ 9634] 00:13:47.368 bw ( KiB/s): min=46216, max=47040, per=100.00%, avg=46616.00, stdev=420.23, samples=4 00:13:47.368 iops : min=11554, max=11760, avg=11654.00, stdev=105.06, samples=4 00:13:47.368 lat (msec) : 4=0.43%, 10=99.56%, 20=0.01% 00:13:47.368 cpu : usr=68.38%, sys=24.74%, ctx=12, majf=0, minf=6 00:13:47.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:47.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.368 issued rwts: total=23527,23375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.368 00:13:47.368 Run status group 0 (all jobs): 00:13:47.368 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=91.9MiB (96.4MB), run=2006-2006msec 00:13:47.368 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.3MiB (95.7MB), run=2006-2006msec 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:47.368 20:50:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:47.368 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:13:47.368 fio-3.35 00:13:47.368 Starting 1 thread 00:13:49.895 00:13:49.895 test: (groupid=0, jobs=1): err= 0: pid=74896: Mon Jul 15 20:50:11 2024 00:13:49.895 read: IOPS=10.9k, BW=171MiB/s (179MB/s)(342MiB/2005msec) 00:13:49.895 slat (nsec): min=2491, max=87096, avg=2729.59, stdev=1401.02 00:13:49.895 clat (usec): min=1605, max=13896, avg=6613.28, stdev=2197.73 00:13:49.895 lat (usec): min=1607, max=13898, avg=6616.01, stdev=2197.83 00:13:49.895 clat percentiles (usec): 00:13:49.895 | 1.00th=[ 3032], 5.00th=[ 3621], 10.00th=[ 4015], 20.00th=[ 4621], 00:13:49.895 | 30.00th=[ 5211], 40.00th=[ 5735], 50.00th=[ 6325], 60.00th=[ 6915], 00:13:49.895 | 70.00th=[ 7635], 80.00th=[ 8291], 90.00th=[ 9634], 95.00th=[11076], 00:13:49.895 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13566], 99.95th=[13829], 00:13:49.895 | 99.99th=[13829] 00:13:49.895 bw ( KiB/s): min=83680, max=92896, per=51.18%, avg=89416.00, stdev=4100.28, samples=4 00:13:49.895 iops : min= 5230, max= 5806, avg=5588.50, stdev=256.27, samples=4 00:13:49.895 write: IOPS=6326, BW=98.9MiB/s (104MB/s)(182MiB/1840msec); 0 zone resets 00:13:49.895 slat (usec): min=28, max=453, avg=30.19, stdev= 7.86 00:13:49.895 clat (usec): min=2937, max=17226, avg=9020.27, stdev=1738.03 00:13:49.895 lat (usec): min=2966, max=17255, avg=9050.46, stdev=1739.82 00:13:49.895 clat percentiles (usec): 00:13:49.895 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7570], 00:13:49.895 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9241], 00:13:49.895 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[11994], 00:13:49.895 | 99.00th=[14353], 99.50th=[15139], 99.90th=[16450], 99.95th=[16712], 00:13:49.895 | 99.99th=[16909] 00:13:49.895 bw ( KiB/s): min=87552, max=96544, per=91.63%, avg=92752.00, stdev=3860.43, samples=4 00:13:49.895 iops : min= 5472, max= 6034, avg=5797.00, stdev=241.28, samples=4 00:13:49.895 lat (msec) : 2=0.01%, 4=6.50%, 10=79.01%, 20=14.48% 00:13:49.895 cpu : usr=82.83%, sys=13.47%, ctx=4, majf=0, minf=10 00:13:49.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:49.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.895 issued rwts: total=21892,11641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.895 00:13:49.895 Run status group 0 (all jobs): 00:13:49.895 READ: 
bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=342MiB (359MB), run=2005-2005msec 00:13:49.896 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=182MiB (191MB), run=1840-1840msec 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.896 rmmod nvme_tcp 00:13:49.896 rmmod nvme_fabrics 00:13:49.896 rmmod nvme_keyring 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74775 ']' 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74775 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 74775 ']' 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 74775 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74775 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74775' 00:13:49.896 killing process with pid 74775 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 74775 00:13:49.896 20:50:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 74775 00:13:50.153 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.154 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:50.154 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:50.154 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.154 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.154 20:50:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.154 20:50:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:50.154 20:50:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.154 20:50:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:50.154 ************************************ 00:13:50.154 END TEST nvmf_fio_host 00:13:50.154 ************************************ 00:13:50.154 00:13:50.154 real 0m8.337s 00:13:50.154 user 0m33.172s 00:13:50.154 sys 0m2.570s 00:13:50.154 20:50:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.154 20:50:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 20:50:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.413 20:50:12 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:50.413 20:50:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.413 20:50:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.413 20:50:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 ************************************ 00:13:50.413 START TEST nvmf_failover 00:13:50.413 ************************************ 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:50.413 * Looking for test storage... 00:13:50.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:13:50.413 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:50.414 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:50.673 Cannot find device "nvmf_tgt_br" 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.673 Cannot find device "nvmf_tgt_br2" 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:50.673 Cannot find device "nvmf_tgt_br" 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:50.673 Cannot find device "nvmf_tgt_br2" 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 
00:13:50.673 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:50.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:50.932 00:13:50.932 --- 10.0.0.2 ping statistics --- 00:13:50.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.932 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:50.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:50.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:13:50.932 00:13:50.932 --- 10.0.0.3 ping statistics --- 00:13:50.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.932 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:50.932 00:13:50.932 --- 10.0.0.1 ping statistics --- 00:13:50.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.932 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75110 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # 
waitforlisten 75110 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75110 ']' 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.932 20:50:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 [2024-07-15 20:50:12.750338] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:13:50.932 [2024-07-15 20:50:12.750399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.190 [2024-07-15 20:50:12.891570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.190 [2024-07-15 20:50:12.979651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.190 [2024-07-15 20:50:12.979700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.190 [2024-07-15 20:50:12.979710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.190 [2024-07-15 20:50:12.979718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.190 [2024-07-15 20:50:12.979724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
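With the namespace in place, the test starts the target application inside it and blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait step; the polling loop below is an illustrative approximation of what waitforlisten does, not the helper's exact logic:

  # Start nvmf_tgt inside the target namespace (command as traced above) and wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # The RPC endpoint is a UNIX socket (/var/tmp/spdk.sock), so it can be probed from the host side.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done
  # From here the test drives the target over rpc.py (nvmf_create_transport -t tcp -o -u 8192, ...).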
00:13:51.190 [2024-07-15 20:50:12.979927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.190 [2024-07-15 20:50:12.980830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.190 [2024-07-15 20:50:12.980833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.190 [2024-07-15 20:50:13.022499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.757 20:50:13 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.015 [2024-07-15 20:50:13.796401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.015 20:50:13 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:52.273 Malloc0 00:13:52.273 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:52.531 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.792 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.792 [2024-07-15 20:50:14.630851] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.792 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:53.051 [2024-07-15 20:50:14.802688] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:53.051 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:13:53.309 [2024-07-15 20:50:14.970546] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75166 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75166 /var/tmp/bdevperf.sock 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 75166 ']' 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.309 20:50:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:54.246 20:50:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.246 20:50:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:13:54.246 20:50:15 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:54.246 NVMe0n1 00:13:54.246 20:50:16 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:54.505 00:13:54.505 20:50:16 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75185 00:13:54.505 20:50:16 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:54.505 20:50:16 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:13:55.880 20:50:17 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.880 20:50:17 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:13:59.196 20:50:20 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:59.196 00:13:59.196 20:50:20 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:59.196 20:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:02.484 20:50:24 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.484 [2024-07-15 20:50:24.191334] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.484 20:50:24 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:03.417 20:50:25 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:03.676 20:50:25 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75185 00:14:10.264 0 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75166 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75166 ']' 
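Condensed, the host/failover.sh flow traced above is: publish the Malloc0 namespace on three TCP listeners, run a 15-second bdevperf verify workload against two of the paths, then rotate the listeners underneath it. A minimal sketch of that sequence, reconstructed from the rpc.py and bdevperf invocations in the xtrace (SPDK_ROOT is a shorthand for /home/vagrant/spdk_repo/spdk; the NQN, address, ports, and flags are copied from the log):

RPC="$SPDK_ROOT/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: TCP transport, a 64 MB malloc bdev (512-byte blocks), one subsystem,
# and listeners on the three ports used later by the rotation.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf idles on its own RPC socket (-z) until perform_tests is sent.
"$SPDK_ROOT/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
# (the script waits for /var/tmp/bdevperf.sock via waitforlisten before the attach calls)
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

# Rotate listeners while I/O runs: drop 4420 (failover to 4421), add a third path on
# 4422, drop 4421 (failover to 4422), re-add 4420, drop 4422 (failback to 4420).
sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 3
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
sleep 3
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
wait "$run_test_pid"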
00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75166 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75166 00:14:10.264 killing process with pid 75166 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75166' 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75166 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75166 00:14:10.264 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:10.264 [2024-07-15 20:50:15.016592] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:14:10.264 [2024-07-15 20:50:15.016667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75166 ] 00:14:10.264 [2024-07-15 20:50:15.160439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.264 [2024-07-15 20:50:15.246929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.264 [2024-07-15 20:50:15.288158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:10.264 Running I/O for 15 seconds... 
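The qpair trace that follows is dominated by per-command "ABORTED - SQ DELETION" completions printed each time a path is torn down; the failover events themselves are easier to follow by filtering the dump for the bdev_nvme notices it contains. A small sketch, using the file path cat'ed above and the NOTICE strings that appear later in the trace:

grep -E 'disconnected and freed|Start failover from|Resetting controller successful' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

For the first rotation this reduces to the 10.0.0.2:4420 -> 10.0.0.2:4421 transition and its successful controller reset recorded at 20:50:17.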
00:14:10.264 [2024-07-15 20:50:17.531528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.264 [2024-07-15 20:50:17.531588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.264 [2024-07-15 20:50:17.531604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.264 [2024-07-15 20:50:17.531617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.264 [2024-07-15 20:50:17.531630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.264 [2024-07-15 20:50:17.531642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.264 [2024-07-15 20:50:17.531654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.264 [2024-07-15 20:50:17.531666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.264 [2024-07-15 20:50:17.531678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aa570 is same with the state(5) to be set 00:14:10.264 [2024-07-15 20:50:17.531884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.264 [2024-07-15 20:50:17.531905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.264 [2024-07-15 20:50:17.531925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.264 [2024-07-15 20:50:17.531938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.264 [2024-07-15 20:50:17.531952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.531964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.531978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.531990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.532985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.532997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533144] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.265 [2024-07-15 20:50:17.533295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.265 [2024-07-15 20:50:17.533308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83672 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:10.266 [2024-07-15 20:50:17.533688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.533976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.533990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.266 [2024-07-15 20:50:17.534597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.266 [2024-07-15 20:50:17.534610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 
[2024-07-15 20:50:17.534763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.534858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.534885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.534911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.534937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.534962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.534976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.534988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:17.535259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:17.535285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x16fb7c0 is same with the state(5) to be set 00:14:10.267 [2024-07-15 20:50:17.535314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:10.267 [2024-07-15 20:50:17.535323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:10.267 [2024-07-15 20:50:17.535332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84120 len:8 PRP1 0x0 PRP2 0x0 00:14:10.267 [2024-07-15 20:50:17.535344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:17.535394] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16fb7c0 was disconnected and freed. reset controller. 00:14:10.267 [2024-07-15 20:50:17.535409] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:10.267 [2024-07-15 20:50:17.535423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:10.267 [2024-07-15 20:50:17.538139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:10.267 [2024-07-15 20:50:17.538190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aa570 (9): Bad file descriptor 00:14:10.267 [2024-07-15 20:50:17.566378] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:10.267 [2024-07-15 20:50:20.996091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996320] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.267 [2024-07-15 20:50:20.996556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:20.996582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:20.996608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:20.996636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:20.996662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.267 [2024-07-15 20:50:20.996675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.267 [2024-07-15 20:50:20.996687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:10.268 [2024-07-15 20:50:20.996857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.996983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.996995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.268 [2024-07-15 20:50:20.997415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.268 [2024-07-15 20:50:20.997764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.268 [2024-07-15 20:50:20.997776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.997827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.997857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.997884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:14:10.269 [2024-07-15 20:50:20.997909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.997935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.997961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.997975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.997987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998187] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.269 [2024-07-15 20:50:20.998683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.998985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.998996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.999010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.999022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.999035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.999047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.999061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.999073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.999086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.269 [2024-07-15 20:50:20.999098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.269 [2024-07-15 20:50:20.999111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 
[2024-07-15 20:50:20.999257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:20.999320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:20.999497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:20.999532] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:14:10.270 [2024-07-15 20:50:20.999543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:14:10.270 [2024-07-15 20:50:20.999552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31472 len:8 PRP1 0x0 PRP2 0x0
00:14:10.270 [2024-07-15 20:50:20.999564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:10.270 [2024-07-15 20:50:20.999612] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x172f020 was disconnected and freed. reset controller.
00:14:10.270 [2024-07-15 20:50:20.999626] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:14:10.270 [2024-07-15 20:50:20.999669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:10.270 [2024-07-15 20:50:20.999682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:10.270 [2024-07-15 20:50:20.999698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:14:10.270 [2024-07-15 20:50:20.999710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:10.270 [2024-07-15 20:50:20.999723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:14:10.270 [2024-07-15 20:50:20.999735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:10.270 [2024-07-15 20:50:20.999748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:10.270 [2024-07-15 20:50:20.999760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:10.270 [2024-07-15 20:50:20.999772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:14:10.270 [2024-07-15 20:50:21.002496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:14:10.270 [2024-07-15 20:50:21.002534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aa570 (9): Bad file descriptor
00:14:10.270 [2024-07-15 20:50:21.032880] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
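The records just above are the meaningful part of this burst: every I/O still queued on qpair 1 is completed with ABORTED - SQ DELETION (00/08), i.e. status code type 0 (generic) with status code 0x08, Command Aborted due to SQ Deletion; the disconnected qpair 0x172f020 is freed, bdev_nvme fails over from the listener at 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset then completes. As a rough illustration only (the exact commands the test ran are not visible in this excerpt), a two-path attach of this kind is typically set up through SPDK's rpc.py along the lines sketched below; the bdev name Nvme0 and the -x failover setting are assumptions, while the addresses, ports, and subsystem NQN simply mirror the log:

# Sketch, not taken from this log: register the same subsystem over two TCP
# listeners so bdev_nvme can fail over between the paths when one qpair drops.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover

With a second path registered like this, losing the active path produces exactly the pattern logged here: queued I/O aborted with SQ DELETION, a "Start failover from ... to ..." notice, and finally "Resetting controller successful." once the alternate path is connected. The abort notices that follow (timestamps 20:50:25.*) are the same pattern repeating on the next path change.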
00:14:10.270 [2024-07-15 20:50:25.387046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387396] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:10.270 [2024-07-15 20:50:25.387698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.270 [2024-07-15 20:50:25.387826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.270 [2024-07-15 20:50:25.387840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.271 [2024-07-15 20:50:25.387852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.271 [2024-07-15 20:50:25.387866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.271 [2024-07-15 20:50:25.387878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.271 [2024-07-15 20:50:25.387892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.271 [2024-07-15 20:50:25.387904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.271 [2024-07-15 20:50:25.387922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:10.271 [2024-07-15 20:50:25.387935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 20:50:25.387949 through 20:50:25.390498: a long run of matching nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs, one per command still outstanding on qid:1 - READs at lba 70760-71128 and WRITEs at lba 71264-71648 - each completed with ABORTED - SQ DELETION (00/08) while the I/O submission queue is deleted for the path failover that follows.]
00:14:10.273 [2024-07-15 20:50:25.390511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172be50 is same with the state(5) to be set 00:14:10.273 [2024-07-15 20:50:25.390525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:10.273 [2024-07-15 20:50:25.390534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:10.273 [2024-07-15 20:50:25.390544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71136 len:8 PRP1 0x0 PRP2 0x0 00:14:10.273 [2024-07-15 20:50:25.390556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.273 [2024-07-15 20:50:25.390606] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x172be50 was disconnected and freed. reset controller.
00:14:10.273 [2024-07-15 20:50:25.390622] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:10.273 [2024-07-15 20:50:25.390669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.273 [2024-07-15 20:50:25.390684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.273 [2024-07-15 20:50:25.390703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.273 [2024-07-15 20:50:25.390716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.273 [2024-07-15 20:50:25.390728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.273 [2024-07-15 20:50:25.390741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.273 [2024-07-15 20:50:25.390753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:10.273 [2024-07-15 20:50:25.390766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:10.273 [2024-07-15 20:50:25.390778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:10.273 [2024-07-15 20:50:25.393492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:10.273 [2024-07-15 20:50:25.393530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aa570 (9): Bad file descriptor 00:14:10.273 [2024-07-15 20:50:25.422223] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
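The burst of ABORTED - SQ DELETION (00/08) completions above is the expected side effect of a path failover: when bdev_nvme gives up on the 10.0.0.2:4422 path it deletes the I/O submission queue, every command still queued on qid:1 is completed with that status, and the controller is reconnected on the next registered path (10.0.0.2:4420) and reset. A rough sketch of how the failover test wires up and then pulls the paths that trigger this, using the same rpc.py calls that appear further down in this trace (socket path, ports and NQN taken from the log; this is a sketch, not the verbatim failover.sh):

# Expose the subsystem on extra target ports so bdev_nvme has alternate
# paths to fail over to.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Register all three paths under one controller name inside bdevperf ...
for port in 4420 4421 4422; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# ... then drop the active path: bdev_nvme aborts the queued I/O (the
# notices above) and fails over to the next trid.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1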
00:14:10.273 00:14:10.273 Latency(us) 00:14:10.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.273 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:10.273 Verification LBA range: start 0x0 length 0x4000 00:14:10.273 NVMe0n1 : 15.01 11758.41 45.93 265.27 0.00 10623.48 460.59 13896.79 00:14:10.273 =================================================================================================================== 00:14:10.273 Total : 11758.41 45.93 265.27 0.00 10623.48 460.59 13896.79 00:14:10.273 Received shutdown signal, test time was about 15.000000 seconds 00:14:10.273 00:14:10.273 Latency(us) 00:14:10.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.273 =================================================================================================================== 00:14:10.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75359 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75359 /var/tmp/bdevperf.sock 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75359 ']' 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
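The second bdevperf instance started above uses -z, so it comes up idle and waits for RPCs on /var/tmp/bdevperf.sock instead of running a job immediately; the NVMe paths are then attached over that socket and the queued verify workload is kicked off with perform_tests. A minimal sketch of that launch-and-drive pattern (binary and script paths as in the trace; the harness also waits for the socket before issuing RPCs):

# Start bdevperf in RPC-wait mode: -z = wait for RPCs, -r = RPC socket,
# -q 128 / -o 4096 / -w verify / -t 1 = queue depth, 4 KiB I/O, verify
# workload, 1 second runtime.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Once the paths are attached (see the bdev_nvme_attach_controller calls
# below), start the queued job and block until it completes.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests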
00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.273 20:50:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:10.881 20:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.881 20:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:10.881 20:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:10.881 [2024-07-15 20:50:32.737433] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:10.881 20:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:11.139 [2024-07-15 20:50:32.917293] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:11.139 20:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:11.397 NVMe0n1 00:14:11.397 20:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:11.655 00:14:11.656 20:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:11.914 00:14:11.914 20:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:11.914 20:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:12.173 20:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:12.173 20:50:34 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:15.459 20:50:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:15.459 20:50:37 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:15.459 20:50:37 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75430 00:14:15.459 20:50:37 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.459 20:50:37 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75430 00:14:16.831 0 00:14:16.831 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:16.831 [2024-07-15 20:50:31.725203] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:14:16.831 [2024-07-15 20:50:31.725317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75359 ] 00:14:16.831 [2024-07-15 20:50:31.865110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.831 [2024-07-15 20:50:31.944890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.831 [2024-07-15 20:50:31.985782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.831 [2024-07-15 20:50:34.034745] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:16.831 [2024-07-15 20:50:34.034846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.831 [2024-07-15 20:50:34.034866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.831 [2024-07-15 20:50:34.034882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.831 [2024-07-15 20:50:34.034894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.831 [2024-07-15 20:50:34.034907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.831 [2024-07-15 20:50:34.034920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.831 [2024-07-15 20:50:34.034932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.831 [2024-07-15 20:50:34.034944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.831 [2024-07-15 20:50:34.034956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:16.831 [2024-07-15 20:50:34.034997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:16.831 [2024-07-15 20:50:34.035020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1721570 (9): Bad file descriptor 00:14:16.831 [2024-07-15 20:50:34.045414] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:16.831 Running I/O for 1 seconds... 
00:14:16.831 00:14:16.831 Latency(us) 00:14:16.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.831 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:16.831 Verification LBA range: start 0x0 length 0x4000 00:14:16.831 NVMe0n1 : 1.01 11432.03 44.66 0.00 0.00 11139.25 809.33 14949.58 00:14:16.831 =================================================================================================================== 00:14:16.831 Total : 11432.03 44.66 0.00 0.00 11139.25 809.33 14949.58 00:14:16.831 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:16.831 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:16.831 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:16.831 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:16.831 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:17.089 20:50:38 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:17.345 20:50:39 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75359 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75359 ']' 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75359 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75359 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:20.628 killing process with pid 75359 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75359' 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75359 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75359 00:14:20.628 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:20.886 20:50:42 
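Each time a path is pulled (4420, then 4422 and 4421 above), the script pauses and then checks that the NVMe0 controller is still registered in bdevperf, i.e. that bdev_nvme failed over to a remaining path, before removing the next one. A sketch of that check, using the same RPC and grep that appear in the trace:

# Give bdev_nvme a moment to complete the failover, then confirm the
# controller is still present; grep -q exits non-zero and fails the test
# if NVMe0 has disappeared.
sleep 3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers | grep -q NVMe0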
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.886 rmmod nvme_tcp 00:14:20.886 rmmod nvme_fabrics 00:14:20.886 rmmod nvme_keyring 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:14:20.886 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75110 ']' 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75110 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75110 ']' 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75110 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75110 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75110' 00:14:21.143 killing process with pid 75110 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75110 00:14:21.143 20:50:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75110 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.143 20:50:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.400 20:50:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:21.400 00:14:21.400 real 0m30.974s 00:14:21.400 user 1m57.407s 00:14:21.400 sys 0m6.248s 00:14:21.400 20:50:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.400 20:50:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 ************************************ 00:14:21.400 END TEST nvmf_failover 00:14:21.400 ************************************ 00:14:21.400 20:50:43 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:14:21.400 20:50:43 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:21.400 20:50:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.400 20:50:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.400 20:50:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 ************************************ 00:14:21.400 START TEST nvmf_host_discovery 00:14:21.400 ************************************ 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:21.400 * Looking for test storage... 00:14:21.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.400 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.656 20:50:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:21.657 Cannot find device "nvmf_tgt_br" 00:14:21.657 
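Before the discovery test can start a target, nvmftestinit tears down any interfaces left over from a previous run (the "Cannot find device" and "Cannot open network namespace" messages around this point are that cleanup failing harmlessly on a clean host) and then rebuilds the virtual topology: one host-side interface at 10.0.0.1, two target-side interfaces at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all tied together with the nvmf_br bridge. A condensed sketch of the nvmf_veth_init steps traced below (names and addresses as in the log; link-up and cleanup steps omitted):

# Namespace plus three veth pairs: init_if stays on the host, the two
# tgt_if interfaces move into the target namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator side, 10.0.0.2/10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bridge the host-side peers together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT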
20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.657 Cannot find device "nvmf_tgt_br2" 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:21.657 Cannot find device "nvmf_tgt_br" 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:21.657 Cannot find device "nvmf_tgt_br2" 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.657 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:21.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:14:21.914 00:14:21.914 --- 10.0.0.2 ping statistics --- 00:14:21.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.914 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:21.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:21.914 00:14:21.914 --- 10.0.0.3 ping statistics --- 00:14:21.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.914 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
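The veth/namespace bring-up traced above (nvmf_veth_init) builds one target namespace, three veth pairs, a bridge joining the host-side ends, an iptables rule admitting NVMe/TCP on port 4420, and then checks connectivity with ping. A condensed standalone sketch of the same topology, using the names and addresses from the log and assuming iproute2/iptables run as root, could look like this:

```bash
#!/usr/bin/env bash
# Condensed from the nvmf_veth_init steps traced above; run as root.
set -e

NS=nvmf_tgt_ns_spdk

# Target-side namespace plus three veth pairs (initiator, target, second target).
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-facing ends into the namespace and address everything.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up on both sides of each pair.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side ends together and allow NVMe/TCP (port 4420) in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, as in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
```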
00:14:21.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:21.914 00:14:21.914 --- 10.0.0.1 ping statistics --- 00:14:21.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.914 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75700 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75700 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 75700 ']' 00:14:21.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.914 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.915 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.915 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.915 20:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.915 20:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.915 [2024-07-15 20:50:43.761091] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:14:21.915 [2024-07-15 20:50:43.761181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.172 [2024-07-15 20:50:43.903293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.172 [2024-07-15 20:50:43.984116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
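With the namespace in place, the harness prefixes NVMF_APP with the `ip netns exec` command, loads nvme-tcp, and launches the target while waiting for its RPC socket (nvmfappstart/waitforlisten above). A rough equivalent, assuming SPDK's scripts/rpc.py lives in the same repo checkout as the nvmf_tgt binary shown in the log, might be:

```bash
#!/usr/bin/env bash
# Start nvmf_tgt inside the test namespace and wait for its RPC socket.
# Binary path and arguments are taken from the log; the rpc.py location and
# the polling interval are assumptions.
set -e

NS=nvmf_tgt_ns_spdk
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock

modprobe nvme-tcp

# Run the target inside the namespace so it binds 10.0.0.2/10.0.0.3.
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll until the UNIX-domain RPC socket answers a trivial RPC (up to ~10 s).
for _ in $(seq 1 100); do
    if "$RPC" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done
```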
00:14:22.172 [2024-07-15 20:50:43.984176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.172 [2024-07-15 20:50:43.984186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.172 [2024-07-15 20:50:43.984194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.172 [2024-07-15 20:50:43.984201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.172 [2024-07-15 20:50:43.984231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.172 [2024-07-15 20:50:44.024908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.735 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.735 [2024-07-15 20:50:44.642294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.992 [2024-07-15 20:50:44.650355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.992 null0 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.992 null1 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75732 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75732 /tmp/host.sock 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 75732 ']' 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.992 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:22.992 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:22.993 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.993 20:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.993 [2024-07-15 20:50:44.733449] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:14:22.993 [2024-07-15 20:50:44.733515] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75732 ] 00:14:22.993 [2024-07-15 20:50:44.873946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.250 [2024-07-15 20:50:44.950624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.250 [2024-07-15 20:50:44.991756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 20:50:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:23.839 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 [2024-07-15 20:50:45.912548] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:24.097 
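The two query helpers exercised repeatedly in the trace, get_subsystem_names and get_bdev_list, boil down to one JSON-RPC call piped through jq, sort, and xargs against the host-side socket. A minimal sketch of their shape, assuming rpc_cmd wraps SPDK's scripts/rpc.py at the path used elsewhere in this run:

```bash
# Query helpers as traced above; the rpc.py path is an assumption.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {
    # Names of NVMe controllers the host has attached, space-separated.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Names of bdevs the host currently exposes (e.g. "nvme0n1 nvme0n2").
    "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Both return empty strings until discovery has attached a subsystem,
# which is exactly what the '' == '' checks above assert.
echo "controllers: $(get_subsystem_names)"
echo "bdevs:       $(get_bdev_list)"
```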
20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:24.097 20:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.355 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:14:24.356 20:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:14:24.921 [2024-07-15 20:50:46.591639] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:24.921 [2024-07-15 20:50:46.591665] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:24.921 [2024-07-15 20:50:46.591678] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:24.921 [2024-07-15 20:50:46.597661] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:24.921 [2024-07-15 20:50:46.654432] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:24.921 [2024-07-15 20:50:46.654459] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:25.487 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:25.488 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.747 [2024-07-15 20:50:47.443396] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:25.747 [2024-07-15 20:50:47.444441] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:25.747 [2024-07-15 20:50:47.444465] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:25.747 [2024-07-15 20:50:47.450431] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:14:25.747 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.748 20:50:47 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:25.748 [2024-07-15 20:50:47.513543] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:25.748 [2024-07-15 20:50:47.513677] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:25.748 [2024-07-15 20:50:47.513689] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:25.748 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.007 [2024-07-15 20:50:47.664335] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:26.007 [2024-07-15 20:50:47.664362] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.007 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:26.007 [2024-07-15 20:50:47.670271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.007 [2024-07-15 20:50:47.670299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.008 [2024-07-15 20:50:47.670310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.008 [2024-07-15 20:50:47.670319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.008 [2024-07-15 20:50:47.670329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.008 [2024-07-15 20:50:47.670337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.008 [2024-07-15 20:50:47.670346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.008 [2024-07-15 20:50:47.670354] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.008 [2024-07-15 20:50:47.670363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638640 is same with the state(5) to be set 00:14:26.008 [2024-07-15 20:50:47.670418] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:14:26.008 [2024-07-15 20:50:47.670433] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:26.008 [2024-07-15 20:50:47.670480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638640 (9): Bad file descriptor 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- 
# waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:26.008 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:26.266 
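Every state change above is asserted through the same polling pattern: waitforcondition evaluates a condition string up to ten times, sleeping between attempts, as the traced `local max=10` / `(( max-- ))` / `eval` / `sleep 1` sequence shows. A sketch of that loop, usable with the query helpers shown earlier:

```bash
# Polling helper reconstructed from the traced waitforcondition loop above.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0              # condition met
        fi
        sleep 1                   # matches the traced 'sleep 1'
    done
    echo "condition never became true: $cond" >&2
    return 1
}

# Typical uses from this test, e.g. after bdev_nvme_stop_discovery the host
# should end up with no controllers and no bdevs:
waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
```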
20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.266 20:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.266 20:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.199 [2024-07-15 20:50:49.028408] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:27.199 [2024-07-15 20:50:49.028442] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:27.199 [2024-07-15 20:50:49.028457] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:27.199 [2024-07-15 20:50:49.034424] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:14:27.199 [2024-07-15 20:50:49.094501] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:27.199 [2024-07-15 20:50:49.094720] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.199 20:50:49 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.199 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.458 request: 00:14:27.458 { 00:14:27.458 "name": "nvme", 00:14:27.458 "trtype": "tcp", 00:14:27.458 "traddr": "10.0.0.2", 00:14:27.458 "adrfam": "ipv4", 00:14:27.458 "trsvcid": "8009", 00:14:27.458 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:27.458 "wait_for_attach": true, 00:14:27.458 "method": "bdev_nvme_start_discovery", 00:14:27.458 "req_id": 1 00:14:27.458 } 00:14:27.458 Got JSON-RPC error response 00:14:27.458 response: 00:14:27.458 { 00:14:27.458 "code": -17, 00:14:27.458 "message": "File exists" 00:14:27.458 } 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.458 request: 00:14:27.458 { 00:14:27.458 "name": "nvme_second", 00:14:27.458 "trtype": "tcp", 00:14:27.458 "traddr": "10.0.0.2", 00:14:27.458 "adrfam": "ipv4", 00:14:27.458 "trsvcid": "8009", 00:14:27.458 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:27.458 "wait_for_attach": true, 00:14:27.458 "method": "bdev_nvme_start_discovery", 00:14:27.458 "req_id": 1 00:14:27.458 } 00:14:27.458 Got JSON-RPC error response 00:14:27.458 response: 00:14:27.458 { 00:14:27.458 "code": -17, 00:14:27.458 "message": "File exists" 00:14:27.458 } 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.458 20:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.865 [2024-07-15 20:50:50.361584] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:28.865 [2024-07-15 20:50:50.361650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16758f0 with addr=10.0.0.2, port=8010 00:14:28.865 [2024-07-15 20:50:50.361672] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:28.865 [2024-07-15 20:50:50.361682] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:28.866 [2024-07-15 20:50:50.361691] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:29.800 [2024-07-15 20:50:51.359952] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:29.800 [2024-07-15 20:50:51.360009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16758f0 with addr=10.0.0.2, port=8010 00:14:29.800 [2024-07-15 20:50:51.360031] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:29.800 [2024-07-15 20:50:51.360041] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:29.800 [2024-07-15 20:50:51.360050] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:30.734 [2024-07-15 20:50:52.358215] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:14:30.734 request: 00:14:30.734 { 00:14:30.734 "name": "nvme_second", 00:14:30.734 "trtype": "tcp", 00:14:30.734 "traddr": "10.0.0.2", 00:14:30.734 "adrfam": "ipv4", 00:14:30.734 "trsvcid": "8010", 00:14:30.734 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:30.734 "wait_for_attach": false, 00:14:30.734 "attach_timeout_ms": 3000, 00:14:30.734 "method": "bdev_nvme_start_discovery", 00:14:30.734 "req_id": 
1 00:14:30.734 } 00:14:30.734 Got JSON-RPC error response 00:14:30.734 response: 00:14:30.734 { 00:14:30.734 "code": -110, 00:14:30.734 "message": "Connection timed out" 00:14:30.734 } 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75732 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.734 rmmod nvme_tcp 00:14:30.734 rmmod nvme_fabrics 00:14:30.734 rmmod nvme_keyring 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75700 ']' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75700 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 75700 ']' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 75700 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75700 00:14:30.734 
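What this stretch of the trace demonstrates: bdev_nvme_start_discovery is not idempotent. Re-issuing it against a discovery service that already exists fails with JSON-RPC error -17 ("File exists"), and pointing a new discovery name at a port nobody listens on (8010 here) fails with -110 ("Connection timed out") once the 3000 ms attach timeout expires. Replayed by hand it would look roughly like this (socket path, address and host NQN copied from the trace; error handling elided):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # first start succeeds and waits for the initial attach (-w)
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # re-issuing the call while the service exists -> code -17 "File exists"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || true

    # a discovery target that never answers (port 8010) times out after the
    # 3000 ms attach timeout -> code -110 "Connection timed out"
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || true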
killing process with pid 75700 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75700' 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 75700 00:14:30.734 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 75700 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:30.992 ************************************ 00:14:30.992 END TEST nvmf_host_discovery 00:14:30.992 ************************************ 00:14:30.992 00:14:30.992 real 0m9.615s 00:14:30.992 user 0m17.845s 00:14:30.992 sys 0m2.383s 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.992 20:50:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.992 20:50:52 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:30.992 20:50:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.992 20:50:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.992 20:50:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.992 ************************************ 00:14:30.992 START TEST nvmf_host_multipath_status 00:14:30.992 ************************************ 00:14:30.992 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:31.251 * Looking for test storage... 
00:14:31.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.251 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.252 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.252 20:50:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:31.252 Cannot find device "nvmf_tgt_br" 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:14:31.252 Cannot find device "nvmf_tgt_br2" 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:31.252 Cannot find device "nvmf_tgt_br" 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:31.252 Cannot find device "nvmf_tgt_br2" 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:31.252 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.511 20:50:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:31.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:31.511 00:14:31.511 --- 10.0.0.2 ping statistics --- 00:14:31.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.511 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:31.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:14:31.511 00:14:31.511 --- 10.0.0.3 ping statistics --- 00:14:31.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.511 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:14:31.511 00:14:31.511 --- 10.0.0.1 ping statistics --- 00:14:31.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.511 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.511 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76175 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76175 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76175 ']' 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:31.772 20:50:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:31.772 [2024-07-15 20:50:53.495428] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
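Before the target starts, nvmf_veth_init has wired up the topology that the ping output above verifies: the initiator address 10.0.0.1 sits on the host side of a veth pair, the target addresses 10.0.0.2/10.0.0.3 sit inside the nvmf_tgt_ns_spdk namespace, and everything is bridged over nvmf_br with an iptables rule admitting TCP/4420. Condensed to its essential commands (the individual `ip link set <if> up` calls shown in the trace are left out here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity checks whose output appears above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1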
00:14:31.772 [2024-07-15 20:50:53.495635] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.772 [2024-07-15 20:50:53.639193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:32.030 [2024-07-15 20:50:53.715342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.030 [2024-07-15 20:50:53.715388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.030 [2024-07-15 20:50:53.715398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.030 [2024-07-15 20:50:53.715406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.030 [2024-07-15 20:50:53.715413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.030 [2024-07-15 20:50:53.715607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.030 [2024-07-15 20:50:53.715649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.030 [2024-07-15 20:50:53.756749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76175 00:14:32.596 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:32.854 [2024-07-15 20:50:54.534094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.854 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:32.854 Malloc0 00:14:33.112 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:14:33.112 20:50:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.370 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.370 [2024-07-15 20:50:55.279410] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:33.629 [2024-07-15 20:50:55.447223] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76224 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76224 /var/tmp/bdevperf.sock 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76224 ']' 00:14:33.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.629 20:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:34.566 20:50:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.566 20:50:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:14:34.566 20:50:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:34.825 20:50:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:14:35.084 Nvme0n1 00:14:35.084 20:50:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:35.342 Nvme0n1 00:14:35.342 20:50:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:35.342 20:50:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:37.297 20:50:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:37.297 20:50:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:37.554 20:50:59 nvmf_tcp.nvmf_host_multipath_status -- 
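The multipath plumbing above, condensed: the target exposes one ANA-reporting subsystem (-r) backed by a 64 MB, 512-byte-block malloc bdev on two listeners (4420 and 4421), and bdevperf attaches the same subsystem through both of them. The second bdev_nvme_attach_controller with -x multipath is what makes 4421 a second path under the existing Nvme0 controller rather than a name clash. Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: transport, backing bdev, ANA-reporting subsystem, two listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # host side: bdevperf is launched as
    #   build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
    # then the same subsystem is attached through both listeners
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10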
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:37.554 20:50:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:38.926 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:39.184 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:39.184 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:39.184 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:39.184 20:51:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:39.184 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:39.184 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:39.184 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:39.184 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:39.442 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:39.442 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:39.442 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:39.442 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:39.699 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:39.699 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:14:39.699 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:39.699 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:39.956 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:39.956 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:14:39.956 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:39.956 20:51:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:40.212 20:51:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:14:41.187 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:14:41.187 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:41.187 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:41.187 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:41.444 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:41.444 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:41.444 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:41.444 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:41.701 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:41.701 20:51:03 
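Each check_status pass above is the same pattern repeated six times: dump bdevperf's I/O paths and pull one attribute (current, connected or accessible) for the path whose trsvcid matches, then compare it against the expected value. A sketch of that helper, assuming the argument order visible in the xtrace (port, attribute, expected value):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # port_status <trsvcid> <attribute> <expected>, e.g. port_status 4421 current true
    port_status() {
        local port=$1 attr=$2 expected=$3
        local value
        value=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $value == "$expected" ]]
    }

    # e.g. after "set_ANA_state non_optimized optimized" the 4421 path should be current
    port_status 4420 current false
    port_status 4421 current true
    port_status 4420 accessible true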
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:41.957 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:41.957 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:41.957 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:41.957 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:42.215 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:42.215 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:42.215 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:42.215 20:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:42.472 20:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:42.472 20:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:14:42.472 20:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:42.472 20:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:14:42.730 20:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:14:43.664 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:14:43.664 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:43.664 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.664 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:43.941 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:43.941 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:43.941 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.941 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:44.200 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:14:44.200 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:44.200 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:44.200 20:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:44.458 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:44.716 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:44.716 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:44.716 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:44.716 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:44.975 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:44.975 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:14:44.975 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:45.233 20:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:14:45.233 20:51:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:14:46.609 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:14:46.609 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:46.609 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:46.610 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.610 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.610 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:46.610 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:46.610 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.868 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:47.126 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:47.126 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:47.126 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:47.126 20:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:47.384 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:47.384 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:47.384 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:47.384 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:47.642 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:47.642 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 
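The set_ANA_state step traced above is what flips the ANA state of both listeners between phases of this test. A minimal bash sketch of such a helper, reconstructed only from the multipath_status.sh@59/@60 rpc.py invocations visible in this log (the authoritative definition lives in test/nvmf/host/multipath_status.sh and may differ in detail):

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        # (optimized | non_optimized | inaccessible), as exercised in this run.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }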
00:14:47.642 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:47.642 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:14:47.899 20:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:14:48.832 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:14:48.832 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:48.832 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.832 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:49.088 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:49.088 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:49.088 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.088 20:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.397 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:49.655 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.655 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:49.655 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.655 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:14:49.913 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:50.171 20:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:50.429 20:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:14:51.363 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:14:51.363 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:51.363 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.363 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:51.630 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:51.630 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:51.630 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.630 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:51.895 20:51:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:51.895 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:52.152 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:52.152 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:52.152 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:52.152 20:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:52.410 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:52.410 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:52.410 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:52.410 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:52.410 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:52.410 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:14:52.668 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:14:52.668 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:52.926 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:53.185 20:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:14:54.121 20:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:14:54.121 20:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:54.121 20:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.121 20:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.382 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:54.642 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.642 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:54.642 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.642 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:54.901 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.901 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:54.901 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:54.901 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:54.901 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:54.901 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:55.160 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:55.160 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:55.160 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:55.160 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:14:55.160 20:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:55.419 20:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:14:55.678 20:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:14:56.614 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:14:56.614 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:56.614 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.614 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:56.872 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:56.872 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:56.872 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:56.872 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:56.872 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:56.873 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:56.873 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:56.873 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.129 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:57.129 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:57.129 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:57.129 20:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.386 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:57.386 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:57.386 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.386 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:57.643 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:57.643 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:57.643 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:57.643 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:57.643 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:57.644 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:14:57.644 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:57.901 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:14:58.158 20:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:14:59.104 20:51:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:14:59.104 20:51:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:59.104 20:51:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.104 20:51:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.402 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:59.658 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.658 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:59.659 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.659 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:14:59.917 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:59.917 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:59.917 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:59.917 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:00.174 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.174 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:00.174 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:00.174 20:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:00.174 20:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:00.174 20:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:00.174 20:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:00.430 20:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:00.687 20:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:01.617 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:01.617 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:01.617 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.617 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:01.874 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:01.874 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:01.874 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:01.874 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:02.132 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:02.132 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:15:02.132 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.132 20:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:02.132 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.132 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:02.132 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.132 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:02.389 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.389 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:02.389 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:02.389 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.646 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:02.646 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:02.646 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:02.646 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76224 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76224 ']' 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76224 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76224 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:02.904 killing process with pid 76224 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76224' 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76224 
00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76224 00:15:02.904 Connection closed with partial response: 00:15:02.904 00:15:02.904 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76224 00:15:02.904 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:03.164 [2024-07-15 20:50:55.495492] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:15:03.164 [2024-07-15 20:50:55.495565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76224 ] 00:15:03.164 [2024-07-15 20:50:55.636728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.164 [2024-07-15 20:50:55.720448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.164 [2024-07-15 20:50:55.761136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.164 Running I/O for 90 seconds... 00:15:03.164 [2024-07-15 20:51:09.461685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.461979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.461991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.164 [2024-07-15 20:51:09.462399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.164 [2024-07-15 20:51:09.462429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:03.164 [2024-07-15 20:51:09.462446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.164 [2024-07-15 20:51:09.462458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.462494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.462524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.462555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.462585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
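The completion records in this dump are bdevperf I/Os finishing with NVMe status ASYMMETRIC ACCESS INACCESSIBLE (the "(03/02)" in each record, i.e. SCT 0x03 / SC 0x02) while a path's ANA state was set to inaccessible. A purely illustrative, hypothetical one-liner (not part of the test) to tally such completions from the captured log:

    # Count bdevperf completions per ASYMMETRIC ACCESS status in the catted log file.
    grep -o 'ASYMMETRIC ACCESS [A-Z]*' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c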
00:15:03.165 [2024-07-15 20:51:09.462603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.462615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.462645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.462977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.462989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:03.165 [2024-07-15 20:51:09.463514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.463943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.463972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.463990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.165 [2024-07-15 20:51:09.464309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.464340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.464370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:03.165 [2024-07-15 20:51:09.464388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.165 [2024-07-15 20:51:09.464405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.464435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:15:03.166 [2024-07-15 20:51:09.464453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.464465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.464496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.464526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.464559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.464973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.464986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.465016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:09.465787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.465826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.465863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:03.166 [2024-07-15 20:51:09.465900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.465937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.465973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.465997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:09.466411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:09.466423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.397874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 
20:51:22.397904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.397974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.397987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.398016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.398056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.398085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.398115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.398145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.398184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.398214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.398244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.166 [2024-07-15 20:51:22.398273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:03.166 [2024-07-15 20:51:22.398291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.166 [2024-07-15 20:51:22.398303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.398321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.398338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.398356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.398368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.398386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.398398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.399345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.399375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.399406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.167 [2024-07-15 20:51:22.399437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:03.167 [2024-07-15 20:51:22.399566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:03.167 [2024-07-15 20:51:22.399580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.167 Received shutdown signal, test time was about 27.483677 seconds 00:15:03.167 00:15:03.167 Latency(us) 00:15:03.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.167 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:03.167 Verification LBA range: start 0x0 length 0x4000 00:15:03.167 Nvme0n1 : 27.48 11119.77 43.44 0.00 0.00 11490.12 294.45 3018551.31 00:15:03.167 =================================================================================================================== 00:15:03.167 Total : 11119.77 43.44 0.00 0.00 11490.12 294.45 
3018551.31 00:15:03.167 20:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.167 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.167 rmmod nvme_tcp 00:15:03.426 rmmod nvme_fabrics 00:15:03.426 rmmod nvme_keyring 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76175 ']' 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76175 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76175 ']' 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76175 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76175 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:03.426 killing process with pid 76175 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76175' 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76175 00:15:03.426 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76175 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.685 20:51:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:03.685 00:15:03.685 real 0m32.572s 00:15:03.685 user 1m40.297s 00:15:03.685 sys 0m11.777s 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.685 ************************************ 00:15:03.685 END TEST nvmf_host_multipath_status 00:15:03.685 20:51:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:03.685 ************************************ 00:15:03.685 20:51:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:03.685 20:51:25 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:03.685 20:51:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.685 20:51:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.685 20:51:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.685 ************************************ 00:15:03.685 START TEST nvmf_discovery_remove_ifc 00:15:03.685 ************************************ 00:15:03.685 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:03.944 * Looking for test storage... 
00:15:03.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.944 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:03.945 Cannot find device "nvmf_tgt_br" 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:03.945 Cannot find device "nvmf_tgt_br2" 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:03.945 Cannot find device "nvmf_tgt_br" 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:03.945 Cannot find device "nvmf_tgt_br2" 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:03.945 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.203 20:51:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:04.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:15:04.203 00:15:04.203 --- 10.0.0.2 ping statistics --- 00:15:04.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.203 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:04.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.026 ms 00:15:04.203 00:15:04.203 --- 10.0.0.3 ping statistics --- 00:15:04.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.203 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:15:04.203 00:15:04.203 --- 10.0.0.1 ping statistics --- 00:15:04.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.203 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.203 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76935 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76935 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 76935 ']' 00:15:04.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.204 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:04.462 [2024-07-15 20:51:26.113824] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:15:04.462 [2024-07-15 20:51:26.113881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.462 [2024-07-15 20:51:26.252976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.462 [2024-07-15 20:51:26.329368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.462 [2024-07-15 20:51:26.329432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.462 [2024-07-15 20:51:26.329442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.462 [2024-07-15 20:51:26.329450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.462 [2024-07-15 20:51:26.329456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.462 [2024-07-15 20:51:26.329489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.462 [2024-07-15 20:51:26.370381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:05.397 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.397 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:15:05.397 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.397 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.397 20:51:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:05.397 [2024-07-15 20:51:27.020608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.397 [2024-07-15 20:51:27.028692] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:05.397 null0 00:15:05.397 [2024-07-15 20:51:27.060591] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76979 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76979 /tmp/host.sock 00:15:05.397 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 76979 ']' 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:05.397 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:05.397 [2024-07-15 20:51:27.135916] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:15:05.397 [2024-07-15 20:51:27.135978] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76979 ] 00:15:05.397 [2024-07-15 20:51:27.275852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.654 [2024-07-15 20:51:27.358865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.218 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.218 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:15:06.218 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:06.218 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:06.218 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.218 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:06.219 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.219 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:06.219 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.219 20:51:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:06.219 [2024-07-15 20:51:28.010859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:06.219 20:51:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.219 20:51:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:06.219 20:51:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.219 20:51:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.182 [2024-07-15 20:51:29.049894] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:07.182 [2024-07-15 20:51:29.049932] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:07.182 [2024-07-15 20:51:29.049945] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:07.182 [2024-07-15 20:51:29.055923] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:07.440 [2024-07-15 20:51:29.112613] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:07.440 [2024-07-15 20:51:29.112676] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:07.440 [2024-07-15 20:51:29.112698] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:07.440 [2024-07-15 20:51:29.112712] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:07.440 [2024-07-15 20:51:29.112733] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:07.440 [2024-07-15 20:51:29.118485] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc8aed0 was disconnected and freed. delete nvme_qpair. 
00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:07.440 20:51:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:08.374 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.634 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:08.634 20:51:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:09.603 20:51:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:10.538 20:51:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:11.913 20:51:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:12.853 20:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:12.853 [2024-07-15 20:51:34.543042] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:12.853 [2024-07-15 20:51:34.543100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.853 [2024-07-15 20:51:34.543113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.853 [2024-07-15 20:51:34.543126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.853 [2024-07-15 20:51:34.543135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.853 [2024-07-15 20:51:34.543144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.853 [2024-07-15 20:51:34.543153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.853 [2024-07-15 20:51:34.543162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.853 [2024-07-15 20:51:34.543178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.853 [2024-07-15 20:51:34.543187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.853 [2024-07-15 20:51:34.543195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.853 [2024-07-15 20:51:34.543204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0ac0 is same with the state(5) to be set 00:15:12.853 [2024-07-15 20:51:34.553020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf0ac0 (9): Bad file descriptor 00:15:12.853 [2024-07-15 20:51:34.563021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:13.787 20:51:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:13.787 [2024-07-15 20:51:35.571258] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:13.787 [2024-07-15 20:51:35.571412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf0ac0 with addr=10.0.0.2, port=4420 00:15:13.787 [2024-07-15 20:51:35.571459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0ac0 is same with the state(5) to be set 00:15:13.787 [2024-07-15 20:51:35.571546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf0ac0 (9): Bad file descriptor 00:15:13.787 [2024-07-15 20:51:35.572569] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:13.787 [2024-07-15 20:51:35.572651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:13.787 [2024-07-15 20:51:35.572681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:13.787 [2024-07-15 20:51:35.572713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:13.787 [2024-07-15 20:51:35.572786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:13.787 [2024-07-15 20:51:35.572816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:13.787 20:51:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:14.724 [2024-07-15 20:51:36.571267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:14.724 [2024-07-15 20:51:36.571322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:14.724 [2024-07-15 20:51:36.571332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:14.724 [2024-07-15 20:51:36.571343] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:14.724 [2024-07-15 20:51:36.571362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:14.724 [2024-07-15 20:51:36.571388] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:14.724 [2024-07-15 20:51:36.571435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.724 [2024-07-15 20:51:36.571448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.724 [2024-07-15 20:51:36.571461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.724 [2024-07-15 20:51:36.571470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.724 [2024-07-15 20:51:36.571479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.724 [2024-07-15 20:51:36.571488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.724 [2024-07-15 20:51:36.571497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.724 [2024-07-15 20:51:36.571506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.724 [2024-07-15 20:51:36.571514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.724 [2024-07-15 20:51:36.571523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.724 [2024-07-15 20:51:36.571532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:15:14.724 [2024-07-15 20:51:36.572182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf4860 (9): Bad file descriptor 00:15:14.724 [2024-07-15 20:51:36.573191] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:14.724 [2024-07-15 20:51:36.573206] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:14.724 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:14.983 20:51:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.918 20:51:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:15.918 20:51:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:16.853 [2024-07-15 20:51:38.578796] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:16.853 [2024-07-15 20:51:38.578829] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:16.853 [2024-07-15 20:51:38.578843] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:16.853 [2024-07-15 20:51:38.584817] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:16.853 [2024-07-15 20:51:38.640627] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:16.854 [2024-07-15 20:51:38.640681] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:16.854 [2024-07-15 20:51:38.640700] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:16.854 [2024-07-15 20:51:38.640715] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:15:16.854 [2024-07-15 20:51:38.640725] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:16.854 [2024-07-15 20:51:38.647521] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc9ab70 was disconnected and freed. delete nvme_qpair. 
00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76979 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 76979 ']' 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 76979 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76979 00:15:17.113 killing process with pid 76979 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76979' 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 76979 00:15:17.113 20:51:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 76979 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.372 rmmod nvme_tcp 00:15:17.372 rmmod nvme_fabrics 00:15:17.372 rmmod nvme_keyring 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:15:17.372 20:51:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76935 ']' 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76935 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 76935 ']' 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 76935 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76935 00:15:17.372 killing process with pid 76935 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76935' 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 76935 00:15:17.372 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 76935 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:17.631 00:15:17.631 real 0m13.985s 00:15:17.631 user 0m23.448s 00:15:17.631 sys 0m3.073s 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.631 20:51:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:17.631 ************************************ 00:15:17.631 END TEST nvmf_discovery_remove_ifc 00:15:17.631 ************************************ 00:15:17.631 20:51:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.631 20:51:39 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:17.631 20:51:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.631 20:51:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.631 20:51:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.891 ************************************ 00:15:17.891 START TEST nvmf_identify_kernel_target 00:15:17.891 ************************************ 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:17.891 * Looking for test storage... 00:15:17.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:17.891 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:17.892 Cannot find device "nvmf_tgt_br" 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.892 Cannot find device "nvmf_tgt_br2" 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:17.892 Cannot find device "nvmf_tgt_br" 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:17.892 Cannot find device "nvmf_tgt_br2" 00:15:17.892 20:51:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:17.892 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:18.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:15:18.151 00:15:18.151 --- 10.0.0.2 ping statistics --- 00:15:18.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.151 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:18.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:18.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:18.151 00:15:18.151 --- 10.0.0.3 ping statistics --- 00:15:18.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.151 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:18.151 00:15:18.151 --- 10.0.0.1 ping statistics --- 00:15:18.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.151 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.151 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.152 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.152 20:51:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:18.152 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:18.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.719 Waiting for block devices as requested 00:15:18.979 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:18.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:18.979 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:19.239 No valid GPT data, bailing 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:19.239 No valid GPT data, bailing 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:19.239 20:51:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:19.239 No valid GPT data, bailing 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:19.239 No valid GPT data, bailing 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:19.239 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -a 10.0.0.1 -t tcp -s 4420 00:15:19.498 00:15:19.498 Discovery Log Number of Records 2, Generation counter 2 00:15:19.498 =====Discovery Log Entry 0====== 00:15:19.498 trtype: tcp 00:15:19.498 adrfam: ipv4 00:15:19.498 subtype: current discovery subsystem 00:15:19.498 treq: not specified, sq flow control disable supported 00:15:19.498 portid: 1 00:15:19.498 trsvcid: 4420 00:15:19.498 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:19.498 traddr: 10.0.0.1 00:15:19.498 eflags: none 00:15:19.498 sectype: none 00:15:19.498 =====Discovery Log Entry 1====== 00:15:19.498 trtype: tcp 00:15:19.498 adrfam: ipv4 00:15:19.498 subtype: nvme subsystem 00:15:19.498 treq: not specified, sq flow control disable supported 00:15:19.498 portid: 1 00:15:19.498 trsvcid: 4420 00:15:19.498 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:19.498 traddr: 10.0.0.1 00:15:19.498 eflags: none 00:15:19.498 sectype: none 00:15:19.498 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:19.498 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:19.498 ===================================================== 00:15:19.498 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:19.498 ===================================================== 00:15:19.498 Controller Capabilities/Features 00:15:19.498 ================================ 00:15:19.498 Vendor ID: 0000 00:15:19.498 Subsystem Vendor ID: 0000 00:15:19.498 Serial Number: 4f44bb1d61d777e8ca30 00:15:19.498 Model Number: Linux 00:15:19.498 Firmware Version: 6.7.0-68 00:15:19.498 Recommended Arb Burst: 0 00:15:19.498 IEEE OUI Identifier: 00 00 00 00:15:19.498 Multi-path I/O 00:15:19.498 May have multiple subsystem ports: No 00:15:19.498 May have multiple controllers: No 00:15:19.498 Associated with SR-IOV VF: No 00:15:19.498 Max Data Transfer Size: Unlimited 00:15:19.498 Max Number of Namespaces: 0 
00:15:19.498 Max Number of I/O Queues: 1024 00:15:19.498 NVMe Specification Version (VS): 1.3 00:15:19.498 NVMe Specification Version (Identify): 1.3 00:15:19.499 Maximum Queue Entries: 1024 00:15:19.499 Contiguous Queues Required: No 00:15:19.499 Arbitration Mechanisms Supported 00:15:19.499 Weighted Round Robin: Not Supported 00:15:19.499 Vendor Specific: Not Supported 00:15:19.499 Reset Timeout: 7500 ms 00:15:19.499 Doorbell Stride: 4 bytes 00:15:19.499 NVM Subsystem Reset: Not Supported 00:15:19.499 Command Sets Supported 00:15:19.499 NVM Command Set: Supported 00:15:19.499 Boot Partition: Not Supported 00:15:19.499 Memory Page Size Minimum: 4096 bytes 00:15:19.499 Memory Page Size Maximum: 4096 bytes 00:15:19.499 Persistent Memory Region: Not Supported 00:15:19.499 Optional Asynchronous Events Supported 00:15:19.499 Namespace Attribute Notices: Not Supported 00:15:19.499 Firmware Activation Notices: Not Supported 00:15:19.499 ANA Change Notices: Not Supported 00:15:19.499 PLE Aggregate Log Change Notices: Not Supported 00:15:19.499 LBA Status Info Alert Notices: Not Supported 00:15:19.499 EGE Aggregate Log Change Notices: Not Supported 00:15:19.499 Normal NVM Subsystem Shutdown event: Not Supported 00:15:19.499 Zone Descriptor Change Notices: Not Supported 00:15:19.499 Discovery Log Change Notices: Supported 00:15:19.499 Controller Attributes 00:15:19.499 128-bit Host Identifier: Not Supported 00:15:19.499 Non-Operational Permissive Mode: Not Supported 00:15:19.499 NVM Sets: Not Supported 00:15:19.499 Read Recovery Levels: Not Supported 00:15:19.499 Endurance Groups: Not Supported 00:15:19.499 Predictable Latency Mode: Not Supported 00:15:19.499 Traffic Based Keep ALive: Not Supported 00:15:19.499 Namespace Granularity: Not Supported 00:15:19.499 SQ Associations: Not Supported 00:15:19.499 UUID List: Not Supported 00:15:19.499 Multi-Domain Subsystem: Not Supported 00:15:19.499 Fixed Capacity Management: Not Supported 00:15:19.499 Variable Capacity Management: Not Supported 00:15:19.499 Delete Endurance Group: Not Supported 00:15:19.499 Delete NVM Set: Not Supported 00:15:19.499 Extended LBA Formats Supported: Not Supported 00:15:19.499 Flexible Data Placement Supported: Not Supported 00:15:19.499 00:15:19.499 Controller Memory Buffer Support 00:15:19.499 ================================ 00:15:19.499 Supported: No 00:15:19.499 00:15:19.499 Persistent Memory Region Support 00:15:19.499 ================================ 00:15:19.499 Supported: No 00:15:19.499 00:15:19.499 Admin Command Set Attributes 00:15:19.499 ============================ 00:15:19.499 Security Send/Receive: Not Supported 00:15:19.499 Format NVM: Not Supported 00:15:19.499 Firmware Activate/Download: Not Supported 00:15:19.499 Namespace Management: Not Supported 00:15:19.499 Device Self-Test: Not Supported 00:15:19.499 Directives: Not Supported 00:15:19.499 NVMe-MI: Not Supported 00:15:19.499 Virtualization Management: Not Supported 00:15:19.499 Doorbell Buffer Config: Not Supported 00:15:19.499 Get LBA Status Capability: Not Supported 00:15:19.499 Command & Feature Lockdown Capability: Not Supported 00:15:19.499 Abort Command Limit: 1 00:15:19.499 Async Event Request Limit: 1 00:15:19.499 Number of Firmware Slots: N/A 00:15:19.499 Firmware Slot 1 Read-Only: N/A 00:15:19.759 Firmware Activation Without Reset: N/A 00:15:19.759 Multiple Update Detection Support: N/A 00:15:19.759 Firmware Update Granularity: No Information Provided 00:15:19.759 Per-Namespace SMART Log: No 00:15:19.759 Asymmetric Namespace Access Log Page: 
Not Supported 00:15:19.759 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:19.759 Command Effects Log Page: Not Supported 00:15:19.759 Get Log Page Extended Data: Supported 00:15:19.759 Telemetry Log Pages: Not Supported 00:15:19.759 Persistent Event Log Pages: Not Supported 00:15:19.759 Supported Log Pages Log Page: May Support 00:15:19.759 Commands Supported & Effects Log Page: Not Supported 00:15:19.759 Feature Identifiers & Effects Log Page:May Support 00:15:19.759 NVMe-MI Commands & Effects Log Page: May Support 00:15:19.759 Data Area 4 for Telemetry Log: Not Supported 00:15:19.759 Error Log Page Entries Supported: 1 00:15:19.759 Keep Alive: Not Supported 00:15:19.759 00:15:19.759 NVM Command Set Attributes 00:15:19.759 ========================== 00:15:19.759 Submission Queue Entry Size 00:15:19.759 Max: 1 00:15:19.759 Min: 1 00:15:19.759 Completion Queue Entry Size 00:15:19.759 Max: 1 00:15:19.759 Min: 1 00:15:19.759 Number of Namespaces: 0 00:15:19.759 Compare Command: Not Supported 00:15:19.759 Write Uncorrectable Command: Not Supported 00:15:19.759 Dataset Management Command: Not Supported 00:15:19.759 Write Zeroes Command: Not Supported 00:15:19.759 Set Features Save Field: Not Supported 00:15:19.759 Reservations: Not Supported 00:15:19.759 Timestamp: Not Supported 00:15:19.759 Copy: Not Supported 00:15:19.759 Volatile Write Cache: Not Present 00:15:19.759 Atomic Write Unit (Normal): 1 00:15:19.759 Atomic Write Unit (PFail): 1 00:15:19.759 Atomic Compare & Write Unit: 1 00:15:19.759 Fused Compare & Write: Not Supported 00:15:19.759 Scatter-Gather List 00:15:19.759 SGL Command Set: Supported 00:15:19.759 SGL Keyed: Not Supported 00:15:19.759 SGL Bit Bucket Descriptor: Not Supported 00:15:19.759 SGL Metadata Pointer: Not Supported 00:15:19.759 Oversized SGL: Not Supported 00:15:19.759 SGL Metadata Address: Not Supported 00:15:19.759 SGL Offset: Supported 00:15:19.759 Transport SGL Data Block: Not Supported 00:15:19.759 Replay Protected Memory Block: Not Supported 00:15:19.759 00:15:19.759 Firmware Slot Information 00:15:19.759 ========================= 00:15:19.759 Active slot: 0 00:15:19.759 00:15:19.759 00:15:19.759 Error Log 00:15:19.759 ========= 00:15:19.759 00:15:19.759 Active Namespaces 00:15:19.759 ================= 00:15:19.759 Discovery Log Page 00:15:19.759 ================== 00:15:19.759 Generation Counter: 2 00:15:19.759 Number of Records: 2 00:15:19.759 Record Format: 0 00:15:19.759 00:15:19.759 Discovery Log Entry 0 00:15:19.759 ---------------------- 00:15:19.759 Transport Type: 3 (TCP) 00:15:19.759 Address Family: 1 (IPv4) 00:15:19.760 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:19.760 Entry Flags: 00:15:19.760 Duplicate Returned Information: 0 00:15:19.760 Explicit Persistent Connection Support for Discovery: 0 00:15:19.760 Transport Requirements: 00:15:19.760 Secure Channel: Not Specified 00:15:19.760 Port ID: 1 (0x0001) 00:15:19.760 Controller ID: 65535 (0xffff) 00:15:19.760 Admin Max SQ Size: 32 00:15:19.760 Transport Service Identifier: 4420 00:15:19.760 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:19.760 Transport Address: 10.0.0.1 00:15:19.760 Discovery Log Entry 1 00:15:19.760 ---------------------- 00:15:19.760 Transport Type: 3 (TCP) 00:15:19.760 Address Family: 1 (IPv4) 00:15:19.760 Subsystem Type: 2 (NVM Subsystem) 00:15:19.760 Entry Flags: 00:15:19.760 Duplicate Returned Information: 0 00:15:19.760 Explicit Persistent Connection Support for Discovery: 0 00:15:19.760 Transport Requirements: 00:15:19.760 
Secure Channel: Not Specified 00:15:19.760 Port ID: 1 (0x0001) 00:15:19.760 Controller ID: 65535 (0xffff) 00:15:19.760 Admin Max SQ Size: 32 00:15:19.760 Transport Service Identifier: 4420 00:15:19.760 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:19.760 Transport Address: 10.0.0.1 00:15:19.760 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:19.760 get_feature(0x01) failed 00:15:19.760 get_feature(0x02) failed 00:15:19.760 get_feature(0x04) failed 00:15:19.760 ===================================================== 00:15:19.760 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:19.760 ===================================================== 00:15:19.760 Controller Capabilities/Features 00:15:19.760 ================================ 00:15:19.760 Vendor ID: 0000 00:15:19.760 Subsystem Vendor ID: 0000 00:15:19.760 Serial Number: 93ef5b2873f2ceb64bca 00:15:19.760 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:19.760 Firmware Version: 6.7.0-68 00:15:19.760 Recommended Arb Burst: 6 00:15:19.760 IEEE OUI Identifier: 00 00 00 00:15:19.760 Multi-path I/O 00:15:19.760 May have multiple subsystem ports: Yes 00:15:19.760 May have multiple controllers: Yes 00:15:19.760 Associated with SR-IOV VF: No 00:15:19.760 Max Data Transfer Size: Unlimited 00:15:19.760 Max Number of Namespaces: 1024 00:15:19.760 Max Number of I/O Queues: 128 00:15:19.760 NVMe Specification Version (VS): 1.3 00:15:19.760 NVMe Specification Version (Identify): 1.3 00:15:19.760 Maximum Queue Entries: 1024 00:15:19.760 Contiguous Queues Required: No 00:15:19.760 Arbitration Mechanisms Supported 00:15:19.760 Weighted Round Robin: Not Supported 00:15:19.760 Vendor Specific: Not Supported 00:15:19.760 Reset Timeout: 7500 ms 00:15:19.760 Doorbell Stride: 4 bytes 00:15:19.760 NVM Subsystem Reset: Not Supported 00:15:19.760 Command Sets Supported 00:15:19.760 NVM Command Set: Supported 00:15:19.760 Boot Partition: Not Supported 00:15:19.760 Memory Page Size Minimum: 4096 bytes 00:15:19.760 Memory Page Size Maximum: 4096 bytes 00:15:19.760 Persistent Memory Region: Not Supported 00:15:19.760 Optional Asynchronous Events Supported 00:15:19.760 Namespace Attribute Notices: Supported 00:15:19.760 Firmware Activation Notices: Not Supported 00:15:19.760 ANA Change Notices: Supported 00:15:19.760 PLE Aggregate Log Change Notices: Not Supported 00:15:19.760 LBA Status Info Alert Notices: Not Supported 00:15:19.760 EGE Aggregate Log Change Notices: Not Supported 00:15:19.760 Normal NVM Subsystem Shutdown event: Not Supported 00:15:19.760 Zone Descriptor Change Notices: Not Supported 00:15:19.760 Discovery Log Change Notices: Not Supported 00:15:19.760 Controller Attributes 00:15:19.760 128-bit Host Identifier: Supported 00:15:19.760 Non-Operational Permissive Mode: Not Supported 00:15:19.760 NVM Sets: Not Supported 00:15:19.760 Read Recovery Levels: Not Supported 00:15:19.760 Endurance Groups: Not Supported 00:15:19.760 Predictable Latency Mode: Not Supported 00:15:19.760 Traffic Based Keep ALive: Supported 00:15:19.760 Namespace Granularity: Not Supported 00:15:19.760 SQ Associations: Not Supported 00:15:19.760 UUID List: Not Supported 00:15:19.760 Multi-Domain Subsystem: Not Supported 00:15:19.760 Fixed Capacity Management: Not Supported 00:15:19.760 Variable Capacity Management: Not Supported 00:15:19.760 
Delete Endurance Group: Not Supported 00:15:19.760 Delete NVM Set: Not Supported 00:15:19.760 Extended LBA Formats Supported: Not Supported 00:15:19.760 Flexible Data Placement Supported: Not Supported 00:15:19.760 00:15:19.760 Controller Memory Buffer Support 00:15:19.760 ================================ 00:15:19.760 Supported: No 00:15:19.760 00:15:19.760 Persistent Memory Region Support 00:15:19.760 ================================ 00:15:19.760 Supported: No 00:15:19.760 00:15:19.760 Admin Command Set Attributes 00:15:19.760 ============================ 00:15:19.760 Security Send/Receive: Not Supported 00:15:19.760 Format NVM: Not Supported 00:15:19.760 Firmware Activate/Download: Not Supported 00:15:19.760 Namespace Management: Not Supported 00:15:19.760 Device Self-Test: Not Supported 00:15:19.760 Directives: Not Supported 00:15:19.760 NVMe-MI: Not Supported 00:15:19.760 Virtualization Management: Not Supported 00:15:19.760 Doorbell Buffer Config: Not Supported 00:15:19.760 Get LBA Status Capability: Not Supported 00:15:19.760 Command & Feature Lockdown Capability: Not Supported 00:15:19.760 Abort Command Limit: 4 00:15:19.760 Async Event Request Limit: 4 00:15:19.760 Number of Firmware Slots: N/A 00:15:19.760 Firmware Slot 1 Read-Only: N/A 00:15:19.760 Firmware Activation Without Reset: N/A 00:15:19.760 Multiple Update Detection Support: N/A 00:15:19.760 Firmware Update Granularity: No Information Provided 00:15:19.760 Per-Namespace SMART Log: Yes 00:15:19.760 Asymmetric Namespace Access Log Page: Supported 00:15:19.760 ANA Transition Time : 10 sec 00:15:19.760 00:15:19.760 Asymmetric Namespace Access Capabilities 00:15:19.760 ANA Optimized State : Supported 00:15:19.760 ANA Non-Optimized State : Supported 00:15:19.760 ANA Inaccessible State : Supported 00:15:19.760 ANA Persistent Loss State : Supported 00:15:19.760 ANA Change State : Supported 00:15:19.760 ANAGRPID is not changed : No 00:15:19.760 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:19.760 00:15:19.760 ANA Group Identifier Maximum : 128 00:15:19.760 Number of ANA Group Identifiers : 128 00:15:19.760 Max Number of Allowed Namespaces : 1024 00:15:19.760 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:15:19.760 Command Effects Log Page: Supported 00:15:19.760 Get Log Page Extended Data: Supported 00:15:19.760 Telemetry Log Pages: Not Supported 00:15:19.760 Persistent Event Log Pages: Not Supported 00:15:19.760 Supported Log Pages Log Page: May Support 00:15:19.760 Commands Supported & Effects Log Page: Not Supported 00:15:19.760 Feature Identifiers & Effects Log Page:May Support 00:15:19.760 NVMe-MI Commands & Effects Log Page: May Support 00:15:19.760 Data Area 4 for Telemetry Log: Not Supported 00:15:19.760 Error Log Page Entries Supported: 128 00:15:19.760 Keep Alive: Supported 00:15:19.760 Keep Alive Granularity: 1000 ms 00:15:19.760 00:15:19.760 NVM Command Set Attributes 00:15:19.760 ========================== 00:15:19.760 Submission Queue Entry Size 00:15:19.760 Max: 64 00:15:19.760 Min: 64 00:15:19.760 Completion Queue Entry Size 00:15:19.760 Max: 16 00:15:19.760 Min: 16 00:15:19.760 Number of Namespaces: 1024 00:15:19.760 Compare Command: Not Supported 00:15:19.760 Write Uncorrectable Command: Not Supported 00:15:19.760 Dataset Management Command: Supported 00:15:19.760 Write Zeroes Command: Supported 00:15:19.760 Set Features Save Field: Not Supported 00:15:19.760 Reservations: Not Supported 00:15:19.760 Timestamp: Not Supported 00:15:19.760 Copy: Not Supported 00:15:19.760 Volatile Write Cache: Present 
00:15:19.760 Atomic Write Unit (Normal): 1 00:15:19.760 Atomic Write Unit (PFail): 1 00:15:19.760 Atomic Compare & Write Unit: 1 00:15:19.760 Fused Compare & Write: Not Supported 00:15:19.760 Scatter-Gather List 00:15:19.760 SGL Command Set: Supported 00:15:19.760 SGL Keyed: Not Supported 00:15:19.760 SGL Bit Bucket Descriptor: Not Supported 00:15:19.760 SGL Metadata Pointer: Not Supported 00:15:19.760 Oversized SGL: Not Supported 00:15:19.760 SGL Metadata Address: Not Supported 00:15:19.760 SGL Offset: Supported 00:15:19.760 Transport SGL Data Block: Not Supported 00:15:19.760 Replay Protected Memory Block: Not Supported 00:15:19.760 00:15:19.760 Firmware Slot Information 00:15:19.760 ========================= 00:15:19.760 Active slot: 0 00:15:19.760 00:15:19.760 Asymmetric Namespace Access 00:15:19.760 =========================== 00:15:19.760 Change Count : 0 00:15:19.760 Number of ANA Group Descriptors : 1 00:15:19.760 ANA Group Descriptor : 0 00:15:19.760 ANA Group ID : 1 00:15:19.760 Number of NSID Values : 1 00:15:19.760 Change Count : 0 00:15:19.760 ANA State : 1 00:15:19.760 Namespace Identifier : 1 00:15:19.760 00:15:19.760 Commands Supported and Effects 00:15:19.760 ============================== 00:15:19.760 Admin Commands 00:15:19.760 -------------- 00:15:19.760 Get Log Page (02h): Supported 00:15:19.761 Identify (06h): Supported 00:15:19.761 Abort (08h): Supported 00:15:19.761 Set Features (09h): Supported 00:15:19.761 Get Features (0Ah): Supported 00:15:19.761 Asynchronous Event Request (0Ch): Supported 00:15:19.761 Keep Alive (18h): Supported 00:15:19.761 I/O Commands 00:15:19.761 ------------ 00:15:19.761 Flush (00h): Supported 00:15:19.761 Write (01h): Supported LBA-Change 00:15:19.761 Read (02h): Supported 00:15:19.761 Write Zeroes (08h): Supported LBA-Change 00:15:19.761 Dataset Management (09h): Supported 00:15:19.761 00:15:19.761 Error Log 00:15:19.761 ========= 00:15:19.761 Entry: 0 00:15:19.761 Error Count: 0x3 00:15:19.761 Submission Queue Id: 0x0 00:15:19.761 Command Id: 0x5 00:15:19.761 Phase Bit: 0 00:15:19.761 Status Code: 0x2 00:15:19.761 Status Code Type: 0x0 00:15:19.761 Do Not Retry: 1 00:15:19.761 Error Location: 0x28 00:15:19.761 LBA: 0x0 00:15:19.761 Namespace: 0x0 00:15:19.761 Vendor Log Page: 0x0 00:15:19.761 ----------- 00:15:19.761 Entry: 1 00:15:19.761 Error Count: 0x2 00:15:19.761 Submission Queue Id: 0x0 00:15:19.761 Command Id: 0x5 00:15:19.761 Phase Bit: 0 00:15:19.761 Status Code: 0x2 00:15:19.761 Status Code Type: 0x0 00:15:19.761 Do Not Retry: 1 00:15:19.761 Error Location: 0x28 00:15:19.761 LBA: 0x0 00:15:19.761 Namespace: 0x0 00:15:19.761 Vendor Log Page: 0x0 00:15:19.761 ----------- 00:15:19.761 Entry: 2 00:15:19.761 Error Count: 0x1 00:15:19.761 Submission Queue Id: 0x0 00:15:19.761 Command Id: 0x4 00:15:19.761 Phase Bit: 0 00:15:19.761 Status Code: 0x2 00:15:19.761 Status Code Type: 0x0 00:15:19.761 Do Not Retry: 1 00:15:19.761 Error Location: 0x28 00:15:19.761 LBA: 0x0 00:15:19.761 Namespace: 0x0 00:15:19.761 Vendor Log Page: 0x0 00:15:19.761 00:15:19.761 Number of Queues 00:15:19.761 ================ 00:15:19.761 Number of I/O Submission Queues: 128 00:15:19.761 Number of I/O Completion Queues: 128 00:15:19.761 00:15:19.761 ZNS Specific Controller Data 00:15:19.761 ============================ 00:15:19.761 Zone Append Size Limit: 0 00:15:19.761 00:15:19.761 00:15:19.761 Active Namespaces 00:15:19.761 ================= 00:15:19.761 get_feature(0x05) failed 00:15:19.761 Namespace ID:1 00:15:19.761 Command Set Identifier: NVM (00h) 
00:15:19.761 Deallocate: Supported 00:15:19.761 Deallocated/Unwritten Error: Not Supported 00:15:19.761 Deallocated Read Value: Unknown 00:15:19.761 Deallocate in Write Zeroes: Not Supported 00:15:19.761 Deallocated Guard Field: 0xFFFF 00:15:19.761 Flush: Supported 00:15:19.761 Reservation: Not Supported 00:15:19.761 Namespace Sharing Capabilities: Multiple Controllers 00:15:19.761 Size (in LBAs): 1310720 (5GiB) 00:15:19.761 Capacity (in LBAs): 1310720 (5GiB) 00:15:19.761 Utilization (in LBAs): 1310720 (5GiB) 00:15:19.761 UUID: 843a2ff9-ecd1-486d-8c3a-bb462108b141 00:15:19.761 Thin Provisioning: Not Supported 00:15:19.761 Per-NS Atomic Units: Yes 00:15:19.761 Atomic Boundary Size (Normal): 0 00:15:19.761 Atomic Boundary Size (PFail): 0 00:15:19.761 Atomic Boundary Offset: 0 00:15:19.761 NGUID/EUI64 Never Reused: No 00:15:19.761 ANA group ID: 1 00:15:19.761 Namespace Write Protected: No 00:15:19.761 Number of LBA Formats: 1 00:15:19.761 Current LBA Format: LBA Format #00 00:15:19.761 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:19.761 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.761 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:19.761 rmmod nvme_tcp 00:15:20.021 rmmod nvme_fabrics 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:20.021 
20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:15:20.021 20:51:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:20.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.958 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:20.958 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:21.218 00:15:21.218 real 0m3.331s 00:15:21.218 user 0m1.115s 00:15:21.218 sys 0m1.743s 00:15:21.218 20:51:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.218 20:51:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.218 ************************************ 00:15:21.218 END TEST nvmf_identify_kernel_target 00:15:21.218 ************************************ 00:15:21.218 20:51:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:21.218 20:51:42 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:21.218 20:51:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.218 20:51:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.218 20:51:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.218 ************************************ 00:15:21.218 START TEST nvmf_auth_host 00:15:21.218 ************************************ 00:15:21.218 20:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:21.218 * Looking for test storage... 
00:15:21.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.218 20:51:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.219 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:21.479 Cannot find device "nvmf_tgt_br" 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:21.479 Cannot find device "nvmf_tgt_br2" 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:21.479 Cannot find device "nvmf_tgt_br" 
00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:21.479 Cannot find device "nvmf_tgt_br2" 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:21.479 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:21.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:15:21.738 00:15:21.738 --- 10.0.0.2 ping statistics --- 00:15:21.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.738 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:21.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:21.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:21.738 00:15:21.738 --- 10.0.0.3 ping statistics --- 00:15:21.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.738 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:21.738 00:15:21.738 --- 10.0.0.1 ping statistics --- 00:15:21.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.738 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.738 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77863 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77863 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 77863 ']' 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.739 20:51:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.739 20:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5347c5faf0275cdb3df98ce91ee390f3 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NRG 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5347c5faf0275cdb3df98ce91ee390f3 0 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5347c5faf0275cdb3df98ce91ee390f3 0 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5347c5faf0275cdb3df98ce91ee390f3 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:22.672 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NRG 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NRG 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NRG 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b1be3d242c19238f49022fd0117e3b8185b162d24f0f04a277a7143668963fd3 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ein 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b1be3d242c19238f49022fd0117e3b8185b162d24f0f04a277a7143668963fd3 3 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b1be3d242c19238f49022fd0117e3b8185b162d24f0f04a277a7143668963fd3 3 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b1be3d242c19238f49022fd0117e3b8185b162d24f0f04a277a7143668963fd3 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ein 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ein 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ein 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=23ec17ebec8cbd56113f59dd5d7cb4882ddbe5787c611f5a 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fJp 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 23ec17ebec8cbd56113f59dd5d7cb4882ddbe5787c611f5a 0 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 23ec17ebec8cbd56113f59dd5d7cb4882ddbe5787c611f5a 0 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=23ec17ebec8cbd56113f59dd5d7cb4882ddbe5787c611f5a 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fJp 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fJp 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fJp 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fba693122a2aecc030d4aba87b2cee823de1e82d454f55a4 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.V1R 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fba693122a2aecc030d4aba87b2cee823de1e82d454f55a4 2 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fba693122a2aecc030d4aba87b2cee823de1e82d454f55a4 2 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fba693122a2aecc030d4aba87b2cee823de1e82d454f55a4 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.V1R 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.V1R 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.V1R 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a61e1dc3e683a1c5e93a5db7637a6070 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eyM 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a61e1dc3e683a1c5e93a5db7637a6070 
1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a61e1dc3e683a1c5e93a5db7637a6070 1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a61e1dc3e683a1c5e93a5db7637a6070 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:15:22.931 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eyM 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eyM 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.eyM 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a23dd0490884013c8de7104232274b61 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.g0H 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a23dd0490884013c8de7104232274b61 1 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a23dd0490884013c8de7104232274b61 1 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a23dd0490884013c8de7104232274b61 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.g0H 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.g0H 00:15:23.189 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.g0H 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:15:23.190 20:51:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=62deb8b09c9b3da664112b12352c7701dc174a10f70c3dcc 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.p1H 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 62deb8b09c9b3da664112b12352c7701dc174a10f70c3dcc 2 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 62deb8b09c9b3da664112b12352c7701dc174a10f70c3dcc 2 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=62deb8b09c9b3da664112b12352c7701dc174a10f70c3dcc 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:15:23.190 20:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.p1H 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.p1H 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.p1H 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df890b19596f96ddd68522639085f79f 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vI8 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df890b19596f96ddd68522639085f79f 0 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df890b19596f96ddd68522639085f79f 0 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df890b19596f96ddd68522639085f79f 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vI8 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vI8 00:15:23.190 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.vI8 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c5bd56da72febf2bf5253ae963c718d665eeff6068ae082fe021b6ea238bde7 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tPQ 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c5bd56da72febf2bf5253ae963c718d665eeff6068ae082fe021b6ea238bde7 3 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c5bd56da72febf2bf5253ae963c718d665eeff6068ae082fe021b6ea238bde7 3 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c5bd56da72febf2bf5253ae963c718d665eeff6068ae082fe021b6ea238bde7 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tPQ 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tPQ 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tPQ 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77863 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 77863 ']' 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
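Every gen_dhchap_key call above follows the same pattern: read len/2 random bytes as hex with xxd, write a DHHC-1 secret into a chmod-0600 mktemp file, and return the path (keys[0..4] plus the bidirectional ckeys[0..3]; ckeys[4] is deliberately left empty). A minimal re-creation of that helper is sketched below; the encoding step is an assumption inferred from the DHHC-1 strings later in this log, i.e. base64 of the ASCII hex secret followed by its CRC-32, with digest ids null=0, sha256=1, sha384=2, sha512=3:

# hypothetical stand-in for the gen_dhchap_key helper traced above
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex secret, e.g. 5347c5faf0275cdb...
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # assumed layout: DHHC-1:<digest id>:base64(secret bytes + CRC-32 of secret):
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, 'little')   # byte order is an assumption
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
PYEOF
    chmod 0600 "$file"
    echo "$file"
}

keys[0]=$(gen_dhchap_key null 32)      # as in the trace: null/32 host key ...
ckeys[0]=$(gen_dhchap_key sha512 64)   # ... paired with a sha512/64 controller key
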
00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.448 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NRG 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ein ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ein 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fJp 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.V1R ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V1R 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eyM 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.g0H ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g0H 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.p1H 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.vI8 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.vI8 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tPQ 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:15:23.706 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
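With nvmf_tgt listening, each secret file is handed to the target through the keyring RPCs (rpc_cmd above is effectively scripts/rpc.py pointed at the default /var/tmp/spdk.sock; that path for rpc.py is assumed here). The equivalent direct calls for this run; the /tmp paths are the mktemp names generated above and change on every run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc keyring_file_add_key key0  /tmp/spdk.key-null.NRG     # host key, keyid 0
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ein   # controller (bidirectional) key for keyid 0
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.fJp
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V1R
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.eyM
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g0H
$rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.p1H
$rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.vI8
$rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.tPQ   # keyid 4 has no ckey: unidirectional auth only
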
00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:23.707 20:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:24.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:24.273 Waiting for block devices as requested 00:15:24.273 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:24.532 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:25.470 No valid GPT data, bailing 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:25.470 No valid GPT data, bailing 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:25.470 No valid GPT data, bailing 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:25.470 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:25.471 No valid GPT data, bailing 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:15:25.471 20:51:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:25.471 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -a 10.0.0.1 -t tcp -s 4420 00:15:25.730 00:15:25.730 Discovery Log Number of Records 2, Generation counter 2 00:15:25.730 =====Discovery Log Entry 0====== 00:15:25.730 trtype: tcp 00:15:25.730 adrfam: ipv4 00:15:25.730 subtype: current discovery subsystem 00:15:25.730 treq: not specified, sq flow control disable supported 00:15:25.730 portid: 1 00:15:25.730 trsvcid: 4420 00:15:25.730 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:25.730 traddr: 10.0.0.1 00:15:25.730 eflags: none 00:15:25.730 sectype: none 00:15:25.730 =====Discovery Log Entry 1====== 00:15:25.730 trtype: tcp 00:15:25.730 adrfam: ipv4 00:15:25.730 subtype: nvme subsystem 00:15:25.730 treq: not specified, sq flow control disable supported 00:15:25.730 portid: 1 00:15:25.730 trsvcid: 4420 00:15:25.730 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:25.730 traddr: 10.0.0.1 00:15:25.730 eflags: none 00:15:25.731 sectype: none 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- 
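nvmet_auth_init and configure_kernel_target above build the kernel-side counterpart of the target out of configfs: a nqn.2024-02.io.spdk:cnode0 subsystem backed by the spare local disk /dev/nvme1n1, an NVMe/TCP port on 10.0.0.1:4420, an nvme discover to confirm both the discovery subsystem and cnode0 are exported, and a hosts/ entry for nqn.2024-02.io.spdk:host0 whose DH-CHAP hash, DH group and DHHC-1 secrets nvmet_auth_set_key then fills in. The xtrace only prints the echoed values, not the files they are redirected into, so the attribute names in this sketch (device_path, addr_traddr, dhchap_key, ...) are the standard kernel nvmet names and should be read as assumptions rather than output of this run; the DHHC-1 strings are abbreviated:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1" "$nvmet/hosts/$hostnqn"

# namespace 1 backed by the unused local NVMe disk picked out above
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# NVMe/TCP port 1 on the kernel-side address
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# restrict the subsystem to the test host and give that host its DH-CHAP material
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/"
echo 'hmac(sha256)'             > "$nvmet/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048                  > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"
echo 'DHHC-1:00:MjNlYzE3...==:' > "$nvmet/hosts/$hostnqn/dhchap_key"        # host secret (keyid 1)
echo 'DHHC-1:02:ZmJhNjkz...==:' > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"   # controller secret (ckey1)
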
host/auth.sh@93 -- # IFS=, 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.731 nvme0n1 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.731 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- 
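The host side of connect_authenticate is visible above: bdev_nvme_set_options first advertises which DH-CHAP digests and DH groups the initiator will accept, bdev_nvme_attach_controller then connects to nqn.2024-02.io.spdk:cnode0 as nqn.2024-02.io.spdk:host0 using the key1/ckey1 keyring entries, and a successful authentication shows up as the nvme0 controller and the nvme0n1 bdev. The same RPC sequence in isolation (rpc.py path assumed as in the earlier sketch), including the teardown that lets the loop move on to the next digest/dhgroup/keyid combination:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# accept every digest and DH group for the first connect, then authenticate as keyid 1
$rpc bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0              # detach before the next combination
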
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:25.991 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.992 nvme0n1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.992 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.252 nvme0n1 00:15:26.252 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.252 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.252 20:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.252 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.252 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.252 20:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.252 20:51:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:26.252 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.253 nvme0n1 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.253 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:26.513 20:51:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 nvme0n1 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.513 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:26.514 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.773 nvme0n1 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:26.773 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.032 nvme0n1 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.032 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:27.291 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.292 20:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 nvme0n1 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.292 20:51:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:27.292 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 nvme0n1 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.551 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.810 nvme0n1 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.810 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.811 nvme0n1 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.811 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:28.070 20:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
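Condensed, the trace above repeats one check per (digest, dhgroup, keyid) combination: host/auth.sh first programs the key on the kernel nvmet target via nvmet_auth_set_key, then connect_authenticate configures the SPDK initiator and attaches over TCP with DH-HMAC-CHAP. A minimal sketch of a single iteration, assuming the key/ckey entries were registered earlier in the run (not shown in this excerpt) and using only the RPCs that appear in the log:

    digest=sha256; dhgroup=ffdhe4096; keyid=0            # one combination from the loops above
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side: 'hmac(sha256)', dhgroup, DHHC-1 secret(s)
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ip=$(get_main_ns_ip)                                 # resolves to 10.0.0.1 in this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller must show up
    rpc_cmd bdev_nvme_detach_controller nvme0            # tear down before the next combination

The same sequence then repeats for every dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and every keyid, which is why the log below keeps cycling through the identical RPC calls with only the key material and dhgroup changing.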
00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.329 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.589 nvme0n1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.589 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 nvme0n1 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.847 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.848 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.107 nvme0n1 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.107 20:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.366 nvme0n1 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
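One detail worth noting in the recurring `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line: when the controller key for a keyid is empty (keyid 4 in this run, hence the `[[ -z '' ]]` checks), the `:+` expansion produces no arguments at all, so bdev_nvme_attach_controller is called with `--dhchap-key key4` only and authentication is unidirectional. A small illustration of that expansion, with placeholder array values standing in for the DHHC-1 secrets from the log:

    ckeys=( [0]=secret0 [1]=secret1 [2]=secret2 [3]=secret3 [4]= )   # keyid 4 has no controller key
    keyid=4
    extra=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "${#extra[@]}"     # 0 -> host-only (unidirectional) DH-HMAC-CHAP
    keyid=2
    extra=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "${extra[@]}"      # --dhchap-ctrlr-key ckey2 -> bidirectional authentication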
00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:29.366 20:51:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.366 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.624 nvme0n1 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:29.624 20:51:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.007 20:51:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.265 nvme0n1 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.265 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.521 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 nvme0n1 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.778 
20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.778 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.036 nvme0n1 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:32.036 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.037 20:51:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.603 nvme0n1 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.603 20:51:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:32.603 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:32.604 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:32.604 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.604 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.861 nvme0n1 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:32.861 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.862 20:51:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.430 nvme0n1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:33.430 20:51:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.430 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.998 nvme0n1 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.998 20:51:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 nvme0n1 00:15:34.565 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.565 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:34.565 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.565 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:34.565 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.565 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.566 
20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
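Every pass of the trace above repeats the same connect_authenticate shape. A minimal sketch of one pass, reconstructed only from the commands visible in this log (the NQNs, 10.0.0.1:4420, and keyN/ckeyN names are taken directly from the trace; rpc_cmd and get_main_ns_ip are the test suite's own helpers, and the exact quoting here is an assumption):
  # One connect_authenticate pass as seen in the xtrace (sketch, not the script itself).
  digest=sha256 dhgroup=ffdhe8192 keyid=3
  # Restrict the host to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Resolve the initiator-side IP (10.0.0.1 in this run) and connect with the
  # DH-HMAC-CHAP key pair for this keyid; the trace drops --dhchap-ctrlr-key
  # when no ckey is defined (keyid 4).
  ip=$(get_main_ns_ip)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Verify the controller came up authenticated, then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0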
00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.566 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.132 nvme0n1 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.132 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:35.133 
20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.133 20:51:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 nvme0n1 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.700 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.959 nvme0n1 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
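The host/auth.sh@100-@103 lines in this trace come from nested loops over digests, DH groups, and key IDs. Reconstructed from what this excerpt actually exercises (sha256 and sha384 digests, the ffdhe2048/6144/8192 groups, key IDs 0-4); the array contents and loop-body quoting are assumptions, and the real script may cover more combinations than shown here:
  # Driver loop as suggested by the @100-@104 trace lines (sketch).
  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do            # keys 0..4 in this run
              # Load the key (and controller key, if any) into the target,
              # then authenticate a host connection against it.
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done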
00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:35.959 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 nvme0n1 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:15:36.218 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.219 20:51:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 nvme0n1 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.219 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.478 nvme0n1 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:36.478 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.479 nvme0n1 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.479 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.738 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
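The get_main_ns_ip fragment traced here (nvmf/common.sh@741-755) picks the address the host side should dial for the transport under test: it maps each transport to the name of an environment variable, expands that name indirectly, and prints the result, which is 10.0.0.1 for tcp in this run. A minimal sketch reconstructed from the xtrace output; TEST_TRANSPORT is an assumed stand-in for the transport variable, since the trace only shows its already-expanded value (tcp), and the real helper in nvmf/common.sh may differ in detail.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # TEST_TRANSPORT is an assumed name; the trace only shows its value, tcp.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion to the address itself
    echo "${!ip}"                          # prints 10.0.0.1 in this run
}

The address it prints is what the subsequent bdev_nvme_attach_controller entries pass as -a 10.0.0.1.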
00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.739 nvme0n1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
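Each connect_authenticate <digest> <dhgroup> <keyid> call in this log (host/auth.sh@55-65) repeats the same moves against the key that nvmet_auth_set_key just provisioned on the target: restrict the initiator to the digest and DH group under test, attach with the matching DHHC-1 key pair, confirm the controller shows up under the expected name, and detach before the next iteration. A rough sketch of that flow, pieced together from the xtrace entries above; the helper internals in host/auth.sh may differ.

connect_authenticate() {
    local digest dhgroup keyid ckey
    digest=$1 dhgroup=$2 keyid=$3
    # Controller (bidirectional) key is optional: added only when ckey$keyid was generated.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit the initiator to the digest/DH group under test, then connect with the key pair.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # The controller must appear under the expected name, then gets torn down again.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

In the surrounding loop the DH group advances through ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144 while keyid cycles 0 through 4, which is the progression visible in the trace.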
00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.739 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 nvme0n1 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:36.998 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.999 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.258 nvme0n1 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:15:37.258 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.259 20:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.259 nvme0n1 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.259 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.518 nvme0n1 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.518 20:51:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.518 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.519 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.779 nvme0n1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.779 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.039 nvme0n1 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.039 20:51:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:38.039 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.040 20:51:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.300 nvme0n1 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:38.300 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:15:38.301 20:52:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.301 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.559 nvme0n1 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:38.559 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.818 nvme0n1 00:15:38.818 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.818 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.819 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.078 nvme0n1 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:39.078 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:39.079 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.079 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.079 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.079 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.338 20:52:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.338 20:52:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.338 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 nvme0n1 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.597 20:52:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.597 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.855 nvme0n1 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:39.855 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.113 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:40.113 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:40.113 20:52:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:40.113 20:52:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:40.113 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.114 20:52:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 nvme0n1 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
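The pattern repeating in the trace above is the target-side half of the test: for every digest/DH-group/key-id combination, nvmet_auth_set_key selects the HMAC, the FFDHE group, and the DHHC-1 host secret (plus an optional controller secret when bidirectional authentication is exercised) for the allowed host NQN. A minimal standalone sketch of that setup, assuming the kernel nvmet configfs layout with per-host dhchap_* attributes and a host entry already linked to the subsystem; paths and key values are placeholders, not copied from this run:

  # Hypothetical replay of the target-side writes traced above.
  HOSTNQN=nqn.2024-02.io.spdk:host0
  HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
  echo 'hmac(sha384)'       > "$HOSTDIR/dhchap_hash"      # digest used for DH-HMAC-CHAP
  echo 'ffdhe6144'          > "$HOSTDIR/dhchap_dhgroup"   # FFDHE group for the DH exchange
  echo 'DHHC-1:03:<secret>' > "$HOSTDIR/dhchap_key"       # host secret selected by the key id
  echo 'DHHC-1:00:<secret>' > "$HOSTDIR/dhchap_ctrl_key"  # controller secret, only set when a ckey exists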
00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.372 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 nvme0n1 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
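Each connect_authenticate pass is the initiator-side half: it restricts the SPDK host to a single digest/DH-group pair, attaches a controller using the keyring entries that match the target's secrets, checks that the controller actually came up, and detaches it again. The same sequence issued directly with scripts/rpc.py is sketched below; the rpc_cmd calls in the trace are the test suite's wrapper around these RPCs, and key0/ckey0 are assumed to be keyring entries registered earlier in the run:

  # Hypothetical replay of one connect_authenticate iteration.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0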
00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.631 20:52:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.198 nvme0n1 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:41.198 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:41.456 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:41.456 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.456 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.457 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.716 nvme0n1 00:15:41.716 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.716 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.716 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.716 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.716 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.974 20:52:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.540 nvme0n1 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:42.540 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.541 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.108 nvme0n1 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:43.108 20:52:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.108 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.109 20:52:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.677 nvme0n1 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:15:43.677 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.678 nvme0n1 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.678 20:52:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.678 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.937 nvme0n1 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.937 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.938 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.197 nvme0n1 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.197 20:52:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.197 20:52:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.197 20:52:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.197 nvme0n1 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.197 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.457 nvme0n1 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.457 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.716 nvme0n1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.716 
20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.716 20:52:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.716 nvme0n1 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.716 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.975 nvme0n1 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:44.975 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.976 20:52:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.976 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.233 20:52:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.233 nvme0n1 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:45.233 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:45.234 
20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.234 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.492 nvme0n1 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:45.492 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.493 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 nvme0n1 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:45.752 20:52:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:45.752 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.753 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.011 nvme0n1 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.011 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.270 nvme0n1 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.270 20:52:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.270 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.528 nvme0n1 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.528 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.529 nvme0n1 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:46.529 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
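For reference, each pass of the loop above reduces to the host-side RPC sequence below (a minimal sketch, assuming rpc_cmd is the test harness wrapper around SPDK's scripts/rpc.py, that the target listening on 10.0.0.1:4420 was already given the matching secret via nvmet_auth_set_key, and that the key names key0/ckey0 were registered with the host earlier in the test; the NQNs and the sha512/ffdhe6144 pair are the ones this iteration exercises):

# Restrict the host to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# Connect with DH-HMAC-CHAP credentials; --dhchap-ctrlr-key requests bidirectional
# authentication, i.e. the controller must prove knowledge of ckey0 back to the host.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Confirm the controller actually came up, then detach so the next key/group pair starts clean.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0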
00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.787 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.046 nvme0n1 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
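The DHHC-1 strings echoed above are the ASCII representation of an NVMe in-band authentication secret, 'DHHC-1:<hash hint>:<base64 payload>:'. As a side note (an assumption based on the common DH-HMAC-CHAP secret representation, not something this log itself verifies), the base64 payload is the raw key material followed by a 4-byte CRC trailer, so a secret can be sanity-checked offline; a small sketch using the keyid-1 secret from this run:

# Split off the base64 payload (field 3 of the colon-separated DHHC-1 string).
key='DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==:'
payload=$(echo "$key" | cut -d: -f3)
# Decoded length is 52 bytes here: 48 bytes of key material plus the assumed 4-byte CRC.
echo "$payload" | base64 -d | wc -c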
00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.046 20:52:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.304 nvme0n1 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.304 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.563 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.821 nvme0n1 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.821 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.079 nvme0n1 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.079 20:52:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.644 nvme0n1 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:48.644 20:52:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM0N2M1ZmFmMDI3NWNkYjNkZjk4Y2U5MWVlMzkwZjPO20bq: 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjFiZTNkMjQyYzE5MjM4ZjQ5MDIyZmQwMTE3ZTNiODE4NWIxNjJkMjRmMGYwNGEyNzdhNzE0MzY2ODk2M2ZkM1UjXCU=: 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.644 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.210 nvme0n1 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:49.210 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.211 20:52:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.776 nvme0n1 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.776 20:52:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTYxZTFkYzNlNjgzYTFjNWU5M2E1ZGI3NjM3YTYwNzA+CAvB: 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzZGQwNDkwODg0MDEzYzhkZTcxMDQyMzIyNzRiNjHK4w6p: 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.776 20:52:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.342 nvme0n1 00:15:50.342 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.342 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.342 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.342 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.342 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.342 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjJkZWI4YjA5YzliM2RhNjY0MTEyYjEyMzUyYzc3MDFkYzE3NGExMGY3MGMzZGNj5V8G5Q==: 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGY4OTBiMTk1OTZmOTZkZGQ2ODUyMjYzOTA4NWY3OWZLFmtc: 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:15:50.343 20:52:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.343 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.909 nvme0n1 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWM1YmQ1NmRhNzJmZWJmMmJmNTI1M2FlOTYzYzcxOGQ2NjVlZWZmNjA2OGFlMDgyZmUwMjFiNmVhMjM4YmRlN0AGpJY=: 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:50.909 20:52:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 nvme0n1 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNlYzE3ZWJlYzhjYmQ1NjExM2Y1OWRkNWQ3Y2I0ODgyZGRiZTU3ODdjNjExZjVhtz/7fw==: 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmJhNjkzMTIyYTJhZWNjMDMwZDRhYmE4N2IyY2VlODIzZGUxZTgyZDQ1NGY1NWE0z4As2g==: 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.477 
20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 request: 00:15:51.477 { 00:15:51.477 "name": "nvme0", 00:15:51.477 "trtype": "tcp", 00:15:51.477 "traddr": "10.0.0.1", 00:15:51.477 "adrfam": "ipv4", 00:15:51.477 "trsvcid": "4420", 00:15:51.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:51.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:51.477 "prchk_reftag": false, 00:15:51.477 "prchk_guard": false, 00:15:51.477 "hdgst": false, 00:15:51.477 "ddgst": false, 00:15:51.477 "method": "bdev_nvme_attach_controller", 00:15:51.477 "req_id": 1 00:15:51.477 } 00:15:51.477 Got JSON-RPC error response 00:15:51.477 response: 00:15:51.477 { 00:15:51.477 "code": -5, 00:15:51.477 "message": "Input/output error" 00:15:51.477 } 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.477 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.478 request: 00:15:51.478 { 00:15:51.478 "name": "nvme0", 00:15:51.478 "trtype": "tcp", 00:15:51.478 "traddr": "10.0.0.1", 00:15:51.478 "adrfam": "ipv4", 00:15:51.478 "trsvcid": "4420", 00:15:51.478 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:51.478 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:51.478 "prchk_reftag": false, 00:15:51.478 "prchk_guard": false, 00:15:51.478 "hdgst": false, 00:15:51.478 "ddgst": false, 00:15:51.478 "dhchap_key": "key2", 00:15:51.478 "method": "bdev_nvme_attach_controller", 00:15:51.478 "req_id": 1 00:15:51.478 } 00:15:51.478 Got JSON-RPC error response 00:15:51.478 response: 00:15:51.478 { 00:15:51.478 "code": -5, 00:15:51.478 "message": "Input/output error" 00:15:51.478 } 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:15:51.478 20:52:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.478 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:51.736 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:51.737 request: 00:15:51.737 { 00:15:51.737 "name": "nvme0", 00:15:51.737 "trtype": "tcp", 00:15:51.737 "traddr": "10.0.0.1", 00:15:51.737 "adrfam": "ipv4", 
00:15:51.737 "trsvcid": "4420", 00:15:51.737 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:51.737 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:51.737 "prchk_reftag": false, 00:15:51.737 "prchk_guard": false, 00:15:51.737 "hdgst": false, 00:15:51.737 "ddgst": false, 00:15:51.737 "dhchap_key": "key1", 00:15:51.737 "dhchap_ctrlr_key": "ckey2", 00:15:51.737 "method": "bdev_nvme_attach_controller", 00:15:51.737 "req_id": 1 00:15:51.737 } 00:15:51.737 Got JSON-RPC error response 00:15:51.737 response: 00:15:51.737 { 00:15:51.737 "code": -5, 00:15:51.737 "message": "Input/output error" 00:15:51.737 } 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.737 rmmod nvme_tcp 00:15:51.737 rmmod nvme_fabrics 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77863 ']' 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77863 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 77863 ']' 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 77863 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77863 00:15:51.737 killing process with pid 77863 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77863' 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 77863 00:15:51.737 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 77863 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.995 
20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:15:51.995 20:52:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:52.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:52.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.239 20:52:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NRG /tmp/spdk.key-null.fJp /tmp/spdk.key-sha256.eyM /tmp/spdk.key-sha384.p1H /tmp/spdk.key-sha512.tPQ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:15:53.239 20:52:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:53.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:53.498 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:53.498 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:53.757 00:15:53.757 real 0m32.495s 00:15:53.757 user 0m29.745s 00:15:53.757 sys 0m4.885s 00:15:53.757 20:52:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.757 20:52:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.757 
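The cleanup above unlinks the host from the subsystem's allowed_hosts, removes the host entry, and then dismantles the configfs target in the reverse order of its creation before unloading nvmet_tcp/nvmet; setup.sh finally rebinds the NVMe PCI devices for the next test. The redirect target of the traced 'echo 0' is not shown, so the sketch below assumes it disables the namespace before removal; the remaining steps mirror the traced commands.

# Teardown of a configfs-based kernel nvmet target (sketch).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/namespaces/1/enable"          # assumed: quiesce the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
modprobe -r nvmet_tcp nvmet                     # only possible once configfs is empty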
************************************ 00:15:53.757 END TEST nvmf_auth_host 00:15:53.757 ************************************ 00:15:53.757 20:52:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.757 20:52:15 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:15:53.757 20:52:15 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:53.757 20:52:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.757 20:52:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.757 20:52:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.757 ************************************ 00:15:53.757 START TEST nvmf_digest 00:15:53.757 ************************************ 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:53.757 * Looking for test storage... 00:15:53.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.757 20:52:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:54.016 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:54.017 Cannot find device "nvmf_tgt_br" 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.017 Cannot find device "nvmf_tgt_br2" 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:54.017 Cannot find device "nvmf_tgt_br" 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:15:54.017 20:52:15 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:54.017 Cannot find device "nvmf_tgt_br2" 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.017 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:54.276 20:52:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.276 20:52:16 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:54.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:15:54.276 00:15:54.276 --- 10.0.0.2 ping statistics --- 00:15:54.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.276 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:54.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:54.276 00:15:54.276 --- 10.0.0.3 ping statistics --- 00:15:54.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.276 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:54.276 00:15:54.276 --- 10.0.0.1 ping statistics --- 00:15:54.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.276 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:15:54.276 ************************************ 00:15:54.276 START TEST nvmf_digest_clean 00:15:54.276 ************************************ 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:15:54.276 20:52:16 
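nvmf_veth_init builds the test network used by the digest suite; the earlier "Cannot find device" and "Cannot open network namespace" messages are just the tolerated cleanup of a previous topology before it is recreated. The initiator stays in the root namespace at 10.0.0.1, the SPDK target runs inside nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, and the veth peers are stitched together by the nvmf_br bridge; the three pings verify every path before nvme-tcp is loaded. Condensed from the traced commands (pre-clean and error handling omitted):

# Test topology as traced: root netns initiator <-> bridge <-> target netns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> first target address, should answer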
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79417 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79417 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79417 ']' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.276 20:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:54.276 [2024-07-15 20:52:16.173339] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:15:54.276 [2024-07-15 20:52:16.173397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.535 [2024-07-15 20:52:16.300895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.535 [2024-07-15 20:52:16.383596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.535 [2024-07-15 20:52:16.383647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.535 [2024-07-15 20:52:16.383656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.535 [2024-07-15 20:52:16.383665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.535 [2024-07-15 20:52:16.383671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
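nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc, so the application pauses before framework initialization and can be configured over /var/tmp/spdk.sock first; the "Default socket implementaion override: uring" notice that follows is the visible effect of that pre-init configuration. The exact RPCs issued are not part of this excerpt, so sock_set_default_impl below is an assumption suggested by SPDK_TEST_URING=1; the two-step start itself looks roughly like this sketch.

# Start the target paused, wait for its RPC socket, configure, then resume.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
"$rpc" sock_set_default_impl -i uring                   # assumed source of the uring override
"$rpc" framework_start_init                             # finish booting the framework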
00:15:54.535 [2024-07-15 20:52:16.383700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:55.471 [2024-07-15 20:52:17.131567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:55.471 null0 00:15:55.471 [2024-07-15 20:52:17.173131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.471 [2024-07-15 20:52:17.197187] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79452 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79452 /var/tmp/bperf.sock 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79452 ']' 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:55.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.471 20:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:55.471 [2024-07-15 20:52:17.248736] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:15:55.471 [2024-07-15 20:52:17.248809] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79452 ] 00:15:55.730 [2024-07-15 20:52:17.380706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.730 [2024-07-15 20:52:17.468304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.299 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.299 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:15:56.299 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:15:56.299 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:15:56.299 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:56.558 [2024-07-15 20:52:18.299712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:56.558 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:56.558 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:56.902 nvme0n1 00:15:56.902 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:15:56.902 20:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:56.902 Running I/O for 2 seconds... 
00:15:58.806 00:15:58.806 Latency(us) 00:15:58.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.806 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:58.806 nvme0n1 : 2.01 19615.79 76.62 0.00 0.00 6521.53 6000.89 13686.23 00:15:58.806 =================================================================================================================== 00:15:58.806 Total : 19615.79 76.62 0.00 0.00 6521.53 6000.89 13686.23 00:15:58.806 0 00:15:58.806 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:15:58.806 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:15:58.806 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:58.806 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:58.807 | select(.opcode=="crc32c") 00:15:58.807 | "\(.module_name) \(.executed)"' 00:15:58.807 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79452 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79452 ']' 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79452 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79452 00:15:59.065 killing process with pid 79452 00:15:59.065 Received shutdown signal, test time was about 2.000000 seconds 00:15:59.065 00:15:59.065 Latency(us) 00:15:59.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.065 =================================================================================================================== 00:15:59.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79452' 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79452 00:15:59.065 20:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79452 00:15:59.324 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:15:59.324 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79501 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79501 /var/tmp/bperf.sock 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79501 ']' 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:59.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.325 20:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:59.325 [2024-07-15 20:52:21.169341] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:15:59.325 [2024-07-15 20:52:21.169534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79501 ] 00:15:59.325 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:59.325 Zero copy mechanism will not be used. 
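Each run_bperf pass follows the same shape: start bdevperf against /var/tmp/bperf.sock with --wait-for-rpc, run framework_start_init, attach an NVMe-oF controller with --ddgst so every command carries a data digest, drive I/O for two seconds through bdevperf.py perform_tests, then check that the accel layer actually computed crc32c and did so in the expected module (software here, since scan_dsa=false). The zero copy notice is informational: a 128 KiB I/O size exceeds the 64 KiB zero-copy threshold. The post-run check condenses to the sketch below, taken from the traced accel_get_stats / jq pipeline.

# Verify the data-digest work: crc32c operations must have executed, and in
# the software accel module for this configuration.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
read -r acc_module acc_executed < <("$rpc" -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))                 # digests were actually computed
[[ $acc_module == software ]]          # and not offloaded, since DSA is disabled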
00:15:59.584 [2024-07-15 20:52:21.309697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.584 [2024-07-15 20:52:21.392726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.151 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.151 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:00.151 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:00.151 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:00.151 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:00.410 [2024-07-15 20:52:22.224202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:00.410 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:00.410 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:00.668 nvme0n1 00:16:00.668 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:00.668 20:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:00.928 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:00.928 Zero copy mechanism will not be used. 00:16:00.928 Running I/O for 2 seconds... 
00:16:02.828 00:16:02.828 Latency(us) 00:16:02.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.828 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:02.828 nvme0n1 : 2.00 8822.76 1102.85 0.00 0.00 1811.08 1737.10 6606.24 00:16:02.828 =================================================================================================================== 00:16:02.828 Total : 8822.76 1102.85 0.00 0.00 1811.08 1737.10 6606.24 00:16:02.828 0 00:16:02.828 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:02.828 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:02.828 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:02.828 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:02.828 | select(.opcode=="crc32c") 00:16:02.828 | "\(.module_name) \(.executed)"' 00:16:02.828 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79501 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79501 ']' 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79501 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79501 00:16:03.087 killing process with pid 79501 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79501' 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79501 00:16:03.087 Received shutdown signal, test time was about 2.000000 seconds 00:16:03.087 00:16:03.087 Latency(us) 00:16:03.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.087 =================================================================================================================== 00:16:03.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.087 20:52:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79501 00:16:03.345 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79562 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79562 /var/tmp/bperf.sock 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79562 ']' 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:03.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.346 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:03.346 [2024-07-15 20:52:25.122558] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:16:03.346 [2024-07-15 20:52:25.122759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79562 ] 00:16:03.604 [2024-07-15 20:52:25.263099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.604 [2024-07-15 20:52:25.352948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.171 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.171 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:04.171 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:04.171 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:04.171 20:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:04.429 [2024-07-15 20:52:26.164620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:04.429 20:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:04.429 20:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:04.687 nvme0n1 00:16:04.687 20:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:04.687 20:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:04.687 Running I/O for 2 seconds... 
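For reference, the per-pass setup just traced (and repeated for each block size / queue depth combination in this test) boils down to a handful of RPC calls against bdevperf's private socket. A minimal sketch using only the paths and connection parameters printed in the trace; digest.sh's wrapper helpers (bperf_rpc, bperf_py, waitforlisten) are omitted, and a target already listening on 10.0.0.2:4420 is assumed:

  # launch bdevperf idle: --wait-for-rpc defers subsystem init, -z defers the workload
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # once /var/tmp/bperf.sock accepts connections, finish initialization
  # (this is the point where the "Default socket implementaion override: uring" notice appears)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # attach the remote namespace with TCP data digest enabled (--ddgst); it shows up as nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # start the timed run that produces the latency table below
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests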
00:16:07.219 00:16:07.219 Latency(us) 00:16:07.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.219 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.219 nvme0n1 : 2.01 21074.45 82.32 0.00 0.00 6068.62 5632.41 13896.79 00:16:07.219 =================================================================================================================== 00:16:07.219 Total : 21074.45 82.32 0.00 0.00 6068.62 5632.41 13896.79 00:16:07.219 0 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:07.219 | select(.opcode=="crc32c") 00:16:07.219 | "\(.module_name) \(.executed)"' 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79562 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79562 ']' 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79562 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79562 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:07.219 killing process with pid 79562 00:16:07.219 Received shutdown signal, test time was about 2.000000 seconds 00:16:07.219 00:16:07.219 Latency(us) 00:16:07.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.219 =================================================================================================================== 00:16:07.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79562' 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79562 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79562 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:07.219 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79617 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79617 /var/tmp/bperf.sock 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79617 ']' 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:07.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.220 20:52:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:07.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:07.220 Zero copy mechanism will not be used. 00:16:07.220 [2024-07-15 20:52:29.032870] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:16:07.220 [2024-07-15 20:52:29.032935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79617 ] 00:16:07.479 [2024-07-15 20:52:29.172921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.479 [2024-07-15 20:52:29.262213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.149 20:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.149 20:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:08.149 20:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:08.149 20:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:08.149 20:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:08.149 [2024-07-15 20:52:30.057845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:08.408 20:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:08.408 20:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:08.667 nvme0n1 00:16:08.667 20:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:08.667 20:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:08.667 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:08.667 Zero copy mechanism will not be used. 00:16:08.667 Running I/O for 2 seconds... 
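(The two notices just above only record that 131072-byte I/Os are larger than the 65536-byte zero-copy threshold, so zero copy is skipped for this pass.) Each pass is then validated the same way once the I/O finishes: digest.sh queries bdevperf's accel framework for the crc32c statistics and checks both that digests were actually computed and that they ran on the expected module, which is software here since these passes run with scan_dsa=false. A minimal sketch of that check, using the exact RPC and jq filter shown in the trace:

  # ask bdevperf which accel module executed the crc32c operations, and how many times
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

  # digest.sh reads the output as "<module> <count>" and requires count > 0 and
  # module == software before it kills the bdevperf process for this pass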
00:16:10.569 00:16:10.569 Latency(us) 00:16:10.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.569 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:10.569 nvme0n1 : 2.00 8744.74 1093.09 0.00 0.00 1826.29 1348.88 4105.87 00:16:10.569 =================================================================================================================== 00:16:10.569 Total : 8744.74 1093.09 0.00 0.00 1826.29 1348.88 4105.87 00:16:10.569 0 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:10.828 | select(.opcode=="crc32c") 00:16:10.828 | "\(.module_name) \(.executed)"' 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79617 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79617 ']' 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79617 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79617 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:10.828 killing process with pid 79617 00:16:10.828 Received shutdown signal, test time was about 2.000000 seconds 00:16:10.828 00:16:10.828 Latency(us) 00:16:10.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.828 =================================================================================================================== 00:16:10.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79617' 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79617 00:16:10.828 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79617 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79417 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 79417 ']' 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79417 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79417 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.135 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.136 killing process with pid 79417 00:16:11.136 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79417' 00:16:11.136 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79417 00:16:11.136 20:52:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79417 00:16:11.412 00:16:11.412 real 0m16.997s 00:16:11.412 user 0m31.306s 00:16:11.412 sys 0m5.057s 00:16:11.412 ************************************ 00:16:11.412 END TEST nvmf_digest_clean 00:16:11.412 ************************************ 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:11.412 ************************************ 00:16:11.412 START TEST nvmf_digest_error 00:16:11.412 ************************************ 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79700 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79700 00:16:11.412 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:11.413 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79700 ']' 00:16:11.413 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.413 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.413 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.413 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.413 20:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:11.413 [2024-07-15 20:52:33.248333] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:16:11.413 [2024-07-15 20:52:33.248394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.670 [2024-07-15 20:52:33.381779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.670 [2024-07-15 20:52:33.459065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.670 [2024-07-15 20:52:33.459114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.670 [2024-07-15 20:52:33.459123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.670 [2024-07-15 20:52:33.459132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.670 [2024-07-15 20:52:33.459139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.670 [2024-07-15 20:52:33.459162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.236 [2024-07-15 20:52:34.134481] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:12.236 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.236 20:52:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.494 [2024-07-15 20:52:34.187073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:12.494 null0 00:16:12.494 [2024-07-15 20:52:34.228981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.494 [2024-07-15 20:52:34.253024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79732 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79732 /var/tmp/bperf.sock 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79732 ']' 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:12.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:12.494 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.495 20:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.495 [2024-07-15 20:52:34.308241] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:16:12.495 [2024-07-15 20:52:34.308436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79732 ] 00:16:12.752 [2024-07-15 20:52:34.448265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.752 [2024-07-15 20:52:34.542737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.752 [2024-07-15 20:52:34.583345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:13.320 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.320 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:13.320 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:13.320 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:13.578 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:13.578 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.578 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.578 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.578 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.578 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.835 nvme0n1 00:16:13.835 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:13.835 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.835 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.835 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.835 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:13.835 20:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:13.836 Running I/O for 2 seconds... 
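Everything that follows is the intended failure mode of this test rather than a real fault: crc32c on the nvmf target side has been assigned to the error-injecting accel module and told to corrupt its results, so the data digests on the wire stop matching, the initiator's nvme_tcp layer reports a data digest error for each affected read, and because the bdev retry count is -1 those reads complete as COMMAND TRANSIENT TRANSPORT ERROR and are retried instead of failing the job. A minimal sketch of the injection plumbing traced above, assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock:

  # target side (default RPC socket): route crc32c through the error-injecting accel module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

  # bdevperf side: record NVMe error counts and retry failed I/O indefinitely
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # leave injection disabled while the --ddgst controller is attached ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # ... then corrupt the next 256 crc32c results before perform_tests is started
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256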
00:16:14.094 [2024-07-15 20:52:35.746736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.746790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.746802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.759904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.759944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.759956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.773033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.773070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.773080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.786206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.786241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.786251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.799324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.799358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.799385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.812472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.812507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.812517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.825565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.825599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.825610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.838667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.838701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.838711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.851755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.851789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.851800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.864865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.864900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.864911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.877943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.877978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.877988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.891030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.891075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.891086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.904100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.904134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.904145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.917221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.917255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.917265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.930307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.930340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.930351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.943395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.943431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.094 [2024-07-15 20:52:35.943441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.094 [2024-07-15 20:52:35.956481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.094 [2024-07-15 20:52:35.956515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.095 [2024-07-15 20:52:35.956526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.095 [2024-07-15 20:52:35.969567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.095 [2024-07-15 20:52:35.969600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.095 [2024-07-15 20:52:35.969610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.095 [2024-07-15 20:52:35.982633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.095 [2024-07-15 20:52:35.982667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.095 [2024-07-15 20:52:35.982678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.095 [2024-07-15 20:52:35.995708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.095 [2024-07-15 20:52:35.995742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.095 [2024-07-15 20:52:35.995752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.008780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.354 [2024-07-15 20:52:36.008813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.354 [2024-07-15 20:52:36.008824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.021891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.354 [2024-07-15 20:52:36.021924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.354 [2024-07-15 20:52:36.021935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.034960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.354 [2024-07-15 20:52:36.034994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.354 [2024-07-15 20:52:36.035004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.048009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.354 [2024-07-15 20:52:36.048043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.354 [2024-07-15 20:52:36.048069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.061119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.354 [2024-07-15 20:52:36.061154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.354 [2024-07-15 20:52:36.061174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.074231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.354 [2024-07-15 20:52:36.074265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.354 [2024-07-15 20:52:36.074276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.354 [2024-07-15 20:52:36.087318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.087351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.087362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.100360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.100398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:14.355 [2024-07-15 20:52:36.100424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.113460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.113491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.113501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.126547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.126576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.126587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.139619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.139649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.139659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.152693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.152723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.152734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.165754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.165784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.165795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.178836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.178867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.178878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.191904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.191934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:17699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.191944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.204989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.205019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.205031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.218060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.218091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.218101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.231159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.231200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.231210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.244244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.244277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.244288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.355 [2024-07-15 20:52:36.257322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.355 [2024-07-15 20:52:36.257357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.355 [2024-07-15 20:52:36.257368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.270403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.270436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.270446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.283493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.283527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.283538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.296557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.296590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.296600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.309642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.309675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.309686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.322707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.322741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.322752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.335807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.335841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.335852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.348887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.348922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.348933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.361964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.361999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.362009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.375039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 
00:16:14.614 [2024-07-15 20:52:36.375074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.375084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.388109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.614 [2024-07-15 20:52:36.388143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.614 [2024-07-15 20:52:36.388153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.614 [2024-07-15 20:52:36.401185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.401218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.401228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.414266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.414299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.414309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.427331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.427363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.427374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.440394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.440427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.440438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.453463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.453495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.453506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.466526] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.466558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.466569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.479587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.479620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.479630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.492636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.492669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.492679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.505755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.505790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.505801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.615 [2024-07-15 20:52:36.518822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.615 [2024-07-15 20:52:36.518856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.615 [2024-07-15 20:52:36.518866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.531883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.531916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.531926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.544965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.545000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.545011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.558077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.558111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.558122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.576875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.576908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.576919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.589976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.590012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.590023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.603081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.603116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.603127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.616355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.616391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.616402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.629506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.629538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.629549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.642603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.642634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.655714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.655746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.655757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.668801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.668834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.668845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.681903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.681937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.681948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.695013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.695050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.695061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.708108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.708142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.708153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.721227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.721259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.721269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.734350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.734382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 
20:52:36.734393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.747634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.747667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.747678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.760743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.760776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.760787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.773769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.773801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.773811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.885 [2024-07-15 20:52:36.786934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:14.885 [2024-07-15 20:52:36.786966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.885 [2024-07-15 20:52:36.786977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.800054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.800086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.800097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.813065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.813099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.813109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.826099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.826132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25519 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.826143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.839226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.839257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.839268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.852303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.852336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.852347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.865412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.865445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.865455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.878516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.878546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.878558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.891626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.891658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.891668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.904712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.904744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.904755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.917878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.917910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.917921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.930979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.931012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.931022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.944074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.944106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.944117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.957131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.957178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.957190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.970222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.970252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.970263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.983287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.983318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.983328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:36.996351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:36.996381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:36.996391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:37.009431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:37.009463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:37.009473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.143 [2024-07-15 20:52:37.022511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.143 [2024-07-15 20:52:37.022543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.143 [2024-07-15 20:52:37.022554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.144 [2024-07-15 20:52:37.035595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.144 [2024-07-15 20:52:37.035625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.144 [2024-07-15 20:52:37.035635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.144 [2024-07-15 20:52:37.048683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.144 [2024-07-15 20:52:37.048714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.144 [2024-07-15 20:52:37.048724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.061802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.061834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.061845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.074957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.074990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.075000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.088048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.088081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.088091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.101184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 
00:16:15.401 [2024-07-15 20:52:37.101223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.101234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.114250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.114280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.114290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.127329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.127360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.127370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.140385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.140416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.140426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.153460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.153491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.153501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.166562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.166592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.166603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.179639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.179669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.179680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.192704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.192734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.192745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.205793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.205824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.205835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.218857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.218888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.218898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.231926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.231958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.231969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.244997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.245029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.245040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.258067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.258098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.258108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.271116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.271148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.271159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.284205] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.284237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.284247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.401 [2024-07-15 20:52:37.297268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.401 [2024-07-15 20:52:37.297297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.401 [2024-07-15 20:52:37.297308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.310341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.310372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.310382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.323404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.323430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.323440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.336472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.336503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.336514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.349529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.349559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.349570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.362612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.362642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.362652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:15.659 [2024-07-15 20:52:37.375706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.375739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.375749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.388778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.388809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.388820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.401836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.401867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.401878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.420647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.420679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.420689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.433704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.433737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.433748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.446824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.446857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.446868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.459904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.459937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.459947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.472983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.473014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.473025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.486064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.486095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.486106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.499129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.499161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.659 [2024-07-15 20:52:37.499185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.659 [2024-07-15 20:52:37.512221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.659 [2024-07-15 20:52:37.512251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.660 [2024-07-15 20:52:37.512262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.660 [2024-07-15 20:52:37.525328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.660 [2024-07-15 20:52:37.525360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.660 [2024-07-15 20:52:37.525370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.660 [2024-07-15 20:52:37.538449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.660 [2024-07-15 20:52:37.538480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.660 [2024-07-15 20:52:37.538491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.660 [2024-07-15 20:52:37.551547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.660 [2024-07-15 20:52:37.551579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.660 [2024-07-15 20:52:37.551590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.660 [2024-07-15 20:52:37.564647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.660 [2024-07-15 20:52:37.564679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.660 [2024-07-15 20:52:37.564689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.577776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.577812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.577822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.590914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.590948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.590958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.603992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.604024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.604035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.617061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.617092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.617102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.630148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.630186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.630197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.643223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.643254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.643264] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.656295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.656327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.656337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.669389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.669421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.669431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.682475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.682507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.682518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.695560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.695590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.695601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.708647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.708679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.708690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 [2024-07-15 20:52:37.721722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22fc020) 00:16:15.918 [2024-07-15 20:52:37.721755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.918 [2024-07-15 20:52:37.721766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.918 00:16:15.918 Latency(us) 00:16:15.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.918 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:15.918 nvme0n1 : 2.00 19308.92 75.43 0.00 0.00 6624.28 6132.49 25372.17 00:16:15.918 
=================================================================================================================== 00:16:15.918 Total : 19308.92 75.43 0.00 0.00 6624.28 6132.49 25372.17 00:16:15.918 0 00:16:15.918 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:15.918 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:15.918 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:15.918 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:15.918 | .driver_specific 00:16:15.918 | .nvme_error 00:16:15.918 | .status_code 00:16:15.918 | .command_transient_transport_error' 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79732 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79732 ']' 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79732 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79732 00:16:16.179 killing process with pid 79732 00:16:16.179 Received shutdown signal, test time was about 2.000000 seconds 00:16:16.179 00:16:16.179 Latency(us) 00:16:16.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.179 =================================================================================================================== 00:16:16.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79732' 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79732 00:16:16.179 20:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79732 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79787 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:16.437 20:52:38 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79787 /var/tmp/bperf.sock 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79787 ']' 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:16.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.437 20:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:16.437 Zero copy mechanism will not be used. 00:16:16.437 [2024-07-15 20:52:38.213109] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:16:16.437 [2024-07-15 20:52:38.213183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79787 ] 00:16:16.437 [2024-07-15 20:52:38.341602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.695 [2024-07-15 20:52:38.424860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.695 [2024-07-15 20:52:38.466518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:17.261 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.261 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:17.261 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:17.261 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:17.520 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:17.520 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.520 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:17.520 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.520 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:17.520 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:17.779 nvme0n1 00:16:17.779 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:17.779 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.779 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:17.779 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.779 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:17.779 20:52:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:17.779 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:17.779 Zero copy mechanism will not be used. 00:16:17.779 Running I/O for 2 seconds... 00:16:17.779 [2024-07-15 20:52:39.610887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.610932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.610945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.614666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.614699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.614710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.618397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.618429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.618440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.622090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.622119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.622129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.625827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.625856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.625867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.629553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.629582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.629593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.633276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.633303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.633313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.636998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.637025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.637036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.640678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.640706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.640717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.644395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.644422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.644433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.648123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.648153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.648174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.651883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.651911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.651921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.655582] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.655610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.655621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.659335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.659363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.659374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.663059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.663088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.663099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.666777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.666806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.666816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.670500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.670525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.670535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.674219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.674246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.674257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.677946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.677973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.677983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.681651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.681678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.681689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.779 [2024-07-15 20:52:39.685394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:17.779 [2024-07-15 20:52:39.685423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.779 [2024-07-15 20:52:39.685433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.689144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.689188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.689200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.692856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.692883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.692894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.696528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.696556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.696566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.700276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.700302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.700313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.703982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.704010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.704020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.707696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.707726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.707736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.711406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.711434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.711445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.715088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.715116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.715127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.718805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.718833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.718843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.722496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.038 [2024-07-15 20:52:39.722523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.038 [2024-07-15 20:52:39.722534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.038 [2024-07-15 20:52:39.726309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.726336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.726346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.730056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.730082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.730093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.733785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.733812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.733823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.737493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.737520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.737530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.741234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.741259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.741270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.744922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.744949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.744959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.748656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.748683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.748693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.752366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.752393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.752403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.756092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.756120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
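The repeated entries above and below all come from the digest-error step traced at the start of this run: host/digest.sh injects CRC-32C corruption through the accel framework (the "accel_error_inject_error -o crc32c -t corrupt -i 32" call in the xtrace) and then drives a timed bdevperf run over /var/tmp/bperf.sock, so every corrupted digest shows up as a "data digest error" on the TCP receive path. A minimal sketch of that step, reconstructed from the xtrace, is shown here; the rpc.py front end and the variable names are assumptions, while the option strings and the bdevperf.py invocation are copied verbatim from the trace above.

    rootdir=/home/vagrant/spdk_repo/spdk     # repo path as printed in the xtrace
    bperf_sock=/var/tmp/bperf.sock           # bdevperf control socket from the trace

    # Corrupt the computed CRC-32C 32 times so received TCP data digests fail
    # (each failure is logged in this run as "data digest error on tqpair=...").
    "$rootdir/scripts/rpc.py" -s "$bperf_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the timed bdevperf workload over the same socket; each corrupted digest
    # surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests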
00:16:18.039 [2024-07-15 20:52:39.756130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.759830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.759859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.759870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.763587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.763615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.763625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.767292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.767319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.767329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.770984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.771012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.771023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.774741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.774770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.774780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.778472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.778500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.778511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.782181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.782207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.782217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.785846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.785873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.785883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.789548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.789575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.789585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.793286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.793312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.793322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.796998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.797025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.797035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.800707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.800735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.800745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.804369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.804396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.804406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.808080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.808109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.808120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.811856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.811885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.811895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.815570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.815599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.815609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.819269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.819296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.819307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.822963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.822992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.823002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.826604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.826633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.826643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.830299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.039 [2024-07-15 20:52:39.830325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.039 [2024-07-15 20:52:39.830335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.039 [2024-07-15 20:52:39.833990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:18.040 [2024-07-15 20:52:39.834018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.834035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.837710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.837738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.837748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.841406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.841433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.841443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.845089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.845117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.845128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.848800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.848827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.848838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.852536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.852563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.852574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.856262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.856288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.856299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.859993] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.860022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.860032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.863689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.863718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.863729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.867419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.867450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.867460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.871152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.871200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.874880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.874909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.874919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.878585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.878616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.878626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.882314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.882341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.882351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.885999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.886034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.886044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.889738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.889765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.889775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.893443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.893481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.897140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.897177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.897188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.900869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.900897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.900907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.904523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.904550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.904560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.908223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.908251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.908262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.911940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.911969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.911979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.915648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.915677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.915687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.919359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.919386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.923097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.923124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.923135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.926847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.926876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.926886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.930605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.930632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.930642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.934304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.934330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.040 [2024-07-15 20:52:39.934341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.040 [2024-07-15 20:52:39.938044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.040 [2024-07-15 20:52:39.938070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.041 [2024-07-15 20:52:39.938080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.041 [2024-07-15 20:52:39.941812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.041 [2024-07-15 20:52:39.941839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.041 [2024-07-15 20:52:39.941849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.041 [2024-07-15 20:52:39.945544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.041 [2024-07-15 20:52:39.945570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.041 [2024-07-15 20:52:39.945580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.949249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.949274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.949285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.952959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.952986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.952996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.956679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.956706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.956716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.960425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.960452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:18.305 [2024-07-15 20:52:39.960462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.964160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.964199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.964209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.967889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.967918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.967929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.971655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.971684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.971695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.975380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.975408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.975419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.979102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.979131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.979141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.982847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.982876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.982886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.986548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.986576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.986586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.990294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.990320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.990330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.993988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.994015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.994049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:39.997693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:39.997720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:39.997731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.001426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.001453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.001463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.005350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.005388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.005415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.009096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.009125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.009135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.012832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.012861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.012872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.016551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.016579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.016589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.020282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.020305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.020316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.024030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.024057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.024067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.027789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.027816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.027827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.031525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.031553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.031563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.035248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.035275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.035285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.038922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
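Each injected corruption in this run produces the same three entries: a data digest error from nvme_tcp.c on the receive path, a print of the READ it landed on, and a completion reported as COMMAND TRANSIENT TRANSPORT ERROR, printed as (00/22) for status code type 0 / status code 0x22 with the Do Not Retry bit clear (dnr:0). A quick cross-check that the two counts line up in a saved copy of this console output is sketched below; the file name bperf-console.log is only an example, and grep -o is used because the console wraps several entries onto one physical line.

    # Illustrative tally over a saved copy of this console output; not part of the
    # test suite, and bperf-console.log is a hypothetical file name.
    digest_errors=$(grep -o 'data digest error on tqpair' bperf-console.log | wc -l)
    transient_completions=$(grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log | wc -l)
    echo "digest errors:         $digest_errors"
    echo "transient completions: $transient_completions"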
00:16:18.305 [2024-07-15 20:52:40.038953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.038963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.042634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.042663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.042673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.046320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.046349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.305 [2024-07-15 20:52:40.046359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.305 [2024-07-15 20:52:40.050052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.305 [2024-07-15 20:52:40.050079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.050090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.053815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.053844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.053854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.057549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.057576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.057586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.061301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.061325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.061335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.065041] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.065069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.065079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.068786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.068813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.068823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.072529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.072557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.072567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.076275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.076301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.076312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.080003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.080031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.080041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.083768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.083803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.083813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.087495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.087522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.087532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.091209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.091236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.091246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.094920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.094949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.094960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.098595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.098625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.098635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.102340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.102367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.102377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.106077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.106104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.106114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.109794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.109822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.109832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.113508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.113536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.113546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.117205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.117231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.117241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.120921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.120949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.120959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.124618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.124645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.124656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.128316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.128342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.128353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.132056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.132083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.132093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.135775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.135803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.135813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.139471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.139499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.139509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.143205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.143231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.143241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.146894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.146923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.146933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.150580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.150610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.150620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.154343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.154371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.306 [2024-07-15 20:52:40.154381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.306 [2024-07-15 20:52:40.158049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.306 [2024-07-15 20:52:40.158075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.158086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.161785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.161813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.161824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.165520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.165548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:18.307 [2024-07-15 20:52:40.165558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.169263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.169288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.169299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.172964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.172991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.173001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.176674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.176702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.176712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.180342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.180377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.180388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.184059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.184087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.184097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.187805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.187833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.187844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.191560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.191587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.191597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.195304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.195330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.195340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.198996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.199025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.199035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.202685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.202714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.202724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.307 [2024-07-15 20:52:40.206319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.307 [2024-07-15 20:52:40.206346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.307 [2024-07-15 20:52:40.206356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.598 [2024-07-15 20:52:40.210065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.598 [2024-07-15 20:52:40.210094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.598 [2024-07-15 20:52:40.210104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.213900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.213930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.213940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.217621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.217649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.217659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.221322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.221349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.221359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.225034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.225062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.225072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.228758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.228787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.228798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.232465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.232493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.232503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.236210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.236236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.236246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.239926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.239953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.239963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.243666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:18.599 [2024-07-15 20:52:40.243694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.243705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.247409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.247436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.247446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.251095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.251123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.251133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.254808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.254837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.254848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.258573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.258601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.258612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.262286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.262312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.262323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.265994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.266022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.266042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.269702] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.269730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.269741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.273398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.273426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.273436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.277103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.277131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.277142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.280853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.280881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.280891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.284619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.284646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.284657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.288346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.288373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.288384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.292059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.292088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.292098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.295788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.295815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.295825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.299507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.299536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.299546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.303240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.303266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.303277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.306986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.307015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.307025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.310686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.310716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.310726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.599 [2024-07-15 20:52:40.314407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.599 [2024-07-15 20:52:40.314435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.599 [2024-07-15 20:52:40.314445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.318117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.318144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.318154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.321868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.321896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.321907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.325607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.325635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.325646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.329351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.329376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.329387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.333064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.333091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.333102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.336782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.336809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.336819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.340502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.340529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.340539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.344202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.344228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.344238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.347942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.347968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.347978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.351671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.351698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.351707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.355398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.355425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.355435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.359103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.359132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.359142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.362841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.362870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.362880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.366522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.366550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.366561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.370228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.370253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:18.600 [2024-07-15 20:52:40.370263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.373962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.373990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.374000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.377633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.377661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.377671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.381386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.381412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.381423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.385118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.385145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.385156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.388826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.388853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.388863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.392514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.392541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.392552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.396243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.396269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.396279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.399922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.399950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.399960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.403674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.403701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.403710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.407380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.407417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.407428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.411117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.411146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.411156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.414838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.414866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.414876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.418539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.418567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.418578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.600 [2024-07-15 20:52:40.422288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.600 [2024-07-15 20:52:40.422315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.600 [2024-07-15 20:52:40.422325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.426011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.426044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.426055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.429708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.429735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.429745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.433367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.433394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.433404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.437134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.437161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.437182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.440848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.440876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.440886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.444531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.444561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.444571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.448196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:18.601 [2024-07-15 20:52:40.448222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.448233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.451873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.451900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.451911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.455637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.455664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.455674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.459325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.459351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.459361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.463037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.463066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.463076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.466768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.466796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.466806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.470509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.470536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.470547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.474219] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.474242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.477870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.477897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.477907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.481594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.481623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.481633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.485280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.485306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.488943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.488971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.488981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.492772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.492800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.492810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.496546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.496573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.496583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.500259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.500284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.500295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.601 [2024-07-15 20:52:40.503950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.601 [2024-07-15 20:52:40.503977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.601 [2024-07-15 20:52:40.503988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.507667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.507694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.507704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.511391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.511418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.511428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.515108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.515137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.515148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.518809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.518836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.518847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.522555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.522583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.522593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.526277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.526303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.526313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.529972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.529999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.530009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.533708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.533735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.533745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.537436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.537463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.537473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.541182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.541207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.541218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.544872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.544901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.544911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.548536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.548563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.548573] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.552215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.552240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.552251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.555952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.555980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.555990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.559700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.559727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.559753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.563423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.563450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.563460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.567136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.567174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.567185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.570827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.570855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.570865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.574548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.574576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.574586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.578247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.578272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.578282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.581928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.581955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.581966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.585685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.585711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.585721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.589405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.862 [2024-07-15 20:52:40.589432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.862 [2024-07-15 20:52:40.589442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.862 [2024-07-15 20:52:40.593082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.593109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.593119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.596784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.596813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.596823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.600461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.600488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.600499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.604194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.604219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.604229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.607895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.607922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.607933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.611594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.611622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.611631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.615285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.615312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.615322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.618984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.619013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.619024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.622734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.622762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.622772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.626440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.626468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.626478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.630153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.630188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.630198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.633861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.633888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.633898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.637562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.637589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.637599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.641267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.641293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.641303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.644947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.644974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.644984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.648661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.648688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.648698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.652337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:18.863 [2024-07-15 20:52:40.652363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.652373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.656050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.656079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.656089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.659829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.659857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.659867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.663544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.663572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.663582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.667299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.667326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.667336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.671014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.671043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.671053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.674710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.674738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.674749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.678431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.678461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.678472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.682180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.682207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.682217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.685856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.685885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.685895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.689581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.689610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.689620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.693288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.693314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.693324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.696986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.863 [2024-07-15 20:52:40.697015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.863 [2024-07-15 20:52:40.697025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.863 [2024-07-15 20:52:40.700703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.700730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.700740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.704414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.704441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.704452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.708136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.708162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.708190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.711786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.711814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.711824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.715523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.715550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.715560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.719260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.719286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.719296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.722978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.723007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.723018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.726718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.726746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.726756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.730425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.730453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.730463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.734133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.734160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.734182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.737836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.737863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.737873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.741514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.741542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.741552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.745247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.745273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.745284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.748962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.748989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.749000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.752672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.752700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.752711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.756375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.756402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.756413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.760089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.760116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.760126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.763817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.763846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.763856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:18.864 [2024-07-15 20:52:40.767561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:18.864 [2024-07-15 20:52:40.767589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.864 [2024-07-15 20:52:40.767599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.771273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.771298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.771309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.774973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.775002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.775012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.778684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.778712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.778722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.782351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.782379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.782390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.786070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.786096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.786107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.789795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.789824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.789834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.793519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.793548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.793559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.797229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.797255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.797265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.800975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.801003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.801013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.804709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.804737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.804747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.808440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.808468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.808478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.812145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.812183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.812194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.815824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.815862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.819459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.819497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.823123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.823152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.823162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.826863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.826890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.826900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.830633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.830660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.830671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.834341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.834368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.834379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.838050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.838075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.838086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.841804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.841832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.124 [2024-07-15 20:52:40.841842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.124 [2024-07-15 20:52:40.845502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.124 [2024-07-15 20:52:40.845528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.845538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.849234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.849263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.849276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.852972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.853000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.853011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.856647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:19.125 [2024-07-15 20:52:40.856676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.856686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.860359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.860385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.860395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.864090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.864117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.864127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.867787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.867814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.867824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.871493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.871520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.871530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.875235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.875261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.875272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.878966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.878994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.879005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.882715] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.882744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.882755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.886385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.886414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.886424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.890092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.890118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.890128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.893809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.893835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.893845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.897494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.897521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.897531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.901214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.901239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.901249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.904878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.904906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.904916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.908625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.908653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.908664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.912361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.912388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.912398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.916092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.916119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.916130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.919799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.919827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.919837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.923509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.923535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.923546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.927216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.927243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.930931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.930959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.930970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.934620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.934649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.934659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.938358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.938385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.938395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.942067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.942094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.942104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.945799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.945826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.945836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.949520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.125 [2024-07-15 20:52:40.949547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.125 [2024-07-15 20:52:40.949557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.125 [2024-07-15 20:52:40.953198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.953224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.953234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.956901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.956928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.956938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.960633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.960661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.960671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.964365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.964392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.964403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.968121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.968148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.968158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.971873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.971902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.971912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.975611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.975639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.975650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.979302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.979329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.979339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.983013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.983041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:19.126 [2024-07-15 20:52:40.983052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.986694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.986722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.986733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.990394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.990422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.990433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.994124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.994150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.994160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:40.997829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:40.997856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:40.997867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.001548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.001574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.001584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.005300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.005325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.005336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.009048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.009076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.009086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.012854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.012884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.012894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.016556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.016584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.016594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.020236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.020262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.020272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.023950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.023977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.023987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.027609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.027636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.027646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.126 [2024-07-15 20:52:41.031324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.126 [2024-07-15 20:52:41.031360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.126 [2024-07-15 20:52:41.031370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.035081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.035110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.035121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.038824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.038852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.038863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.042568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.042594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.042605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.046314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.046341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.046352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.050013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.050050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.050061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.053740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.053769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.053779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.057444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.057471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.057481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.061145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:19.387 [2024-07-15 20:52:41.061181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.061192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.064841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.064868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.064879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.068540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.068567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.068578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.072269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.072294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.072304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.075965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.075991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.076001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.079706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.079734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.079744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.083426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.083454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.083464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.087114] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.087143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.087153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.090814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.090842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.090852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.094503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.094531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.094542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.098230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.098254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.098265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.101972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.101999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.102009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.105691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.105721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.105731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.109375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.109402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.109412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.113076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.113104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.387 [2024-07-15 20:52:41.113114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.387 [2024-07-15 20:52:41.116821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.387 [2024-07-15 20:52:41.116848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.116859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.120543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.120569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.120580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.124306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.124330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.124340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.128056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.128084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.128094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.131812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.131839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.131849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.135571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.135598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.135608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.139322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.139347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.139358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.143066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.143094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.143105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.146800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.146827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.146838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.150461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.150489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.150500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.154204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.154229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.154240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.157879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.157905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.157915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.161622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.161649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.161660] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.165340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.165366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.165376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.169008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.169035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.169045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.172678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.172706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.172715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.176420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.176446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.176456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.180182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.180208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.180218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.183891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.183919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.183929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.187582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.187609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.187619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.191276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.191302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.191312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.195024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.195052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.195062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.198748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.198776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.198786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.202466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.202494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.202504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.206162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.206195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.206206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.209856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.209883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.209894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.213547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.213574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.213583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.217252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.217276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.217286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.220896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.220925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.388 [2024-07-15 20:52:41.220935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.388 [2024-07-15 20:52:41.224643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.388 [2024-07-15 20:52:41.224670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.224680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.228373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.228399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.228409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.232092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.232121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.232131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.235820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.235849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.235859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.239506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.239534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.239544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.243246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.243267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.243277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.246932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.246960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.246971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.250643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.250670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.250681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.254399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.254427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.254437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.258099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.258125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.258135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.261801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.261829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.261840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.265526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
00:16:19.389 [2024-07-15 20:52:41.265553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.265564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.269246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.269271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.269281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.272983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.273010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.273020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.276686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.276714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.276724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.280401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.280428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.280438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.284104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.284133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.284143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.287806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.287835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.287845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.389 [2024-07-15 20:52:41.291514] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.389 [2024-07-15 20:52:41.291542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.389 [2024-07-15 20:52:41.291552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.295224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.295249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.295260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.298951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.298980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.298990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.302686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.302714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.302725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.306411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.306438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.306449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.310109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.310145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.313860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.313887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.313897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.317623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.317649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.317659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.321329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.321372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.321383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.325063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.325091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.325101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.328788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.328815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.328825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.332507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.332535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.332545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.336244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.336271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.336281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.339969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.339998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.340008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.343707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.650 [2024-07-15 20:52:41.343736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.650 [2024-07-15 20:52:41.343747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.650 [2024-07-15 20:52:41.347413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.347442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.347452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.351136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.351175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.351186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.354894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.354922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.354932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.358610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.358638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.358648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.362298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.362324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.362334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.365995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.366022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.366041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.369743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.369770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.369780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.373432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.373460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.373470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.377125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.377152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.377162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.380840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.380867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.380878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.384509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.384537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.384547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.388201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.388229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.388239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.391949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.391977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:19.651 [2024-07-15 20:52:41.391988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.395734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.395762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.395773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.399424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.399452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.399462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.403126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.403155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.403174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.406844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.406872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.406883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.410564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.410592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.410602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.414322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.414349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.414359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.418058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.418084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.418094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.421793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.421820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.421831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.425470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.425514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.425525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.429207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.429233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.429244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.432915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.432943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.432953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.436624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.436652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.436662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.440334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.440360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.440371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.444076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.444105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.444115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.447818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.447846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.447857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.451542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.651 [2024-07-15 20:52:41.451571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.651 [2024-07-15 20:52:41.451581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.651 [2024-07-15 20:52:41.455267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.455294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.455304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.458954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.458982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.458992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.462658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.462686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.462696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.466356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.466382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.466392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.470063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 
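Editor's note: the nvme_tcp "data digest error" records running through this stretch are the expected outcome of the test. When data digests are negotiated (the controller here is attached with --ddgst), NVMe/TCP carries a CRC32C data digest (DDGST) with each data PDU; the harness corrupts the CRC32C calculation through the accel error-injection RPC (the same arming step is traced below for the randwrite run), so every READ whose recomputed digest no longer matches is completed with COMMAND TRANSIENT TRANSPORT ERROR, the status the test counts. For reference only, a minimal pure-Python sketch of the reflected CRC-32C (Castagnoli) the digest is based on; SPDK uses accelerated implementations via its accel framework, and ddgst_ok below is an illustrative helper, not an SPDK API:

    CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial used by the DDGST

    def crc32c(data: bytes, crc: int = 0) -> int:
        # bit-by-bit reflected CRC-32C; for illustration only, not how SPDK
        # computes it in production
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    # standard CRC-32C check value
    assert crc32c(b"123456789") == 0xE3069283

    def ddgst_ok(pdu_payload: bytes, received_digest: int) -> bool:
        # the kind of check failing above: recompute the digest over the
        # received payload and compare with the DDGST value carried by the PDU
        return crc32c(pdu_payload) == received_digest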
00:16:19.652 [2024-07-15 20:52:41.470088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.470098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.473800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.473827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.473837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.477535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.477562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.477572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.481271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.481296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.481307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.484968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.484995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.485005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.488675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.488702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.488712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.492395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.492422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.492432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.496162] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.496199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.496210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.499838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.499867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.499877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.503549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.503578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.503588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.507280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.507306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.507317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.510977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.511005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.511015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.514713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.514743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.514753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.518391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.518419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.518429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.522123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.522150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.522160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.525891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.525917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.525927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.529614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.529642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.529652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.533341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.533367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.533377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.537050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.537078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.537088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.540787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.540815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.540825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.544523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.544551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.544561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.548252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.548280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.548290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.551922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.551952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.551962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.652 [2024-07-15 20:52:41.555662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.652 [2024-07-15 20:52:41.555690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.652 [2024-07-15 20:52:41.555700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.559359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.559387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.559398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.563051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.563080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.563090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.566812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.566840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.566850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.570525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.570553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.570563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.574235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.574261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.574271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.577954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.577981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.577991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.581664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.581691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.581701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.585373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.585401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.585412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.589053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.589080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.589091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.592784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.592813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.912 [2024-07-15 20:52:41.592824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:19.912 [2024-07-15 20:52:41.596551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba4ac0) 00:16:19.912 [2024-07-15 20:52:41.596578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
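Editor's note: each completion above comes from spdk_nvme_print_completion, where the (00/22) pair is the status code type and status code, i.e. generic status 22h, Command Transient Transport Error, which is what digest.sh counts. A hypothetical offline cross-check that tallies those completions from a saved copy of this log (the harness itself reads the counter over RPC, as traced just below); the helper name and CLI are illustrative only:

    import re
    import sys
    from collections import Counter

    # Completion records as printed above, e.g.
    #   ... COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 ...
    COMPLETION_RE = re.compile(
        r"COMMAND TRANSIENT TRANSPORT ERROR "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
        r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
    )

    def tally_transient_errors(log_path: str) -> Counter:
        # collapse whitespace first so records wrapped across lines still match
        text = re.sub(r"\s+", " ", open(log_path, errors="replace").read())
        return Counter(
            (m["qid"], m["cid"], m["sct"], m["sc"])
            for m in COMPLETION_RE.finditer(text)
        )

    if __name__ == "__main__":
        for (qid, cid, sct, sc), count in tally_transient_errors(sys.argv[1]).items():
            print(f"qid:{qid} cid:{cid} sct:{sct} sc:{sc} -> {count} completions")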
00:16:19.912 [2024-07-15 20:52:41.596588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:19.912 00:16:19.912 Latency(us) 00:16:19.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.912 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:19.912 nvme0n1 : 2.00 8299.30 1037.41 0.00 0.00 1925.36 1763.42 7737.99 00:16:19.912 =================================================================================================================== 00:16:19.912 Total : 8299.30 1037.41 0.00 0.00 1925.36 1763.42 7737.99 00:16:19.912 0 00:16:19.912 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:19.912 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:19.912 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:19.912 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:19.912 | .driver_specific 00:16:19.912 | .nvme_error 00:16:19.912 | .status_code 00:16:19.912 | .command_transient_transport_error' 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 535 > 0 )) 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79787 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79787 ']' 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79787 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.171 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79787 00:16:20.172 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:20.172 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:20.172 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79787' 00:16:20.172 killing process with pid 79787 00:16:20.172 Received shutdown signal, test time was about 2.000000 seconds 00:16:20.172 00:16:20.172 Latency(us) 00:16:20.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.172 =================================================================================================================== 00:16:20.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.172 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79787 00:16:20.172 20:52:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79787 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:20.172 20:52:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79847 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79847 /var/tmp/bperf.sock 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79847 ']' 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:20.172 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:20.172 [2024-07-15 20:52:42.079705] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:16:20.172 [2024-07-15 20:52:42.079765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79847 ] 00:16:20.456 [2024-07-15 20:52:42.206313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.456 [2024-07-15 20:52:42.283436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.456 [2024-07-15 20:52:42.324080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:21.034 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.034 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:21.034 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:21.034 20:52:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:21.293 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:21.293 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.293 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.293 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:21.293 20:52:43 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:21.552 nvme0n1 00:16:21.552 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:21.552 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.552 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:21.552 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.552 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:21.552 20:52:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:21.552 Running I/O for 2 seconds... 00:16:21.552 [2024-07-15 20:52:43.448068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fef90 00:16:21.552 [2024-07-15 20:52:43.450062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.552 [2024-07-15 20:52:43.450097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.552 [2024-07-15 20:52:43.460301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190feb58 00:16:21.811 [2024-07-15 20:52:43.462243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.462272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.472494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fe2e8 00:16:21.811 [2024-07-15 20:52:43.474411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.474438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.484676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fda78 00:16:21.811 [2024-07-15 20:52:43.486585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.486611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.496835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fd208 00:16:21.811 [2024-07-15 20:52:43.498727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.498752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.509061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fc998 00:16:21.811 [2024-07-15 20:52:43.510934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.510962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.521255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fc128 00:16:21.811 [2024-07-15 20:52:43.523112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.523140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.533462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fb8b8 00:16:21.811 [2024-07-15 20:52:43.535319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.535344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.545687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fb048 00:16:21.811 [2024-07-15 20:52:43.547600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.547628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.558439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fa7d8 00:16:21.811 [2024-07-15 20:52:43.560251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.560279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.570651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f9f68 00:16:21.811 [2024-07-15 20:52:43.572442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.572469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.582807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f96f8 00:16:21.811 [2024-07-15 20:52:43.584583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.584608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.594985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f8e88 00:16:21.811 [2024-07-15 20:52:43.596747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.596773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.607150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f8618 00:16:21.811 [2024-07-15 20:52:43.608903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.608928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.619342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f7da8 00:16:21.811 [2024-07-15 20:52:43.621068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.621095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.631532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f7538 00:16:21.811 [2024-07-15 20:52:43.633246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.633271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:21.811 [2024-07-15 20:52:43.643678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f6cc8 00:16:21.811 [2024-07-15 20:52:43.645382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.811 [2024-07-15 20:52:43.645407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:21.812 [2024-07-15 20:52:43.655852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f6458 00:16:21.812 [2024-07-15 20:52:43.657539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.812 [2024-07-15 20:52:43.657565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:21.812 [2024-07-15 20:52:43.668039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f5be8 00:16:21.812 [2024-07-15 20:52:43.669714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.812 [2024-07-15 20:52:43.669740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:21.812 [2024-07-15 20:52:43.680204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f5378 00:16:21.812 [2024-07-15 20:52:43.681853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.812 [2024-07-15 20:52:43.681878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:21.812 [2024-07-15 20:52:43.692407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f4b08 00:16:21.812 [2024-07-15 20:52:43.694044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.812 [2024-07-15 20:52:43.694070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:21.812 [2024-07-15 20:52:43.704564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f4298 00:16:21.812 [2024-07-15 20:52:43.706194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.812 [2024-07-15 20:52:43.706222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:21.812 [2024-07-15 20:52:43.716745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f3a28 00:16:21.812 [2024-07-15 20:52:43.718362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.812 [2024-07-15 20:52:43.718389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:22.071 [2024-07-15 20:52:43.728922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f31b8 00:16:22.072 [2024-07-15 20:52:43.730527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.730553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.741069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f2948 00:16:22.072 [2024-07-15 20:52:43.742656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.742683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.753279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f20d8 00:16:22.072 [2024-07-15 20:52:43.754851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 
20:52:43.754879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.765449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f1868 00:16:22.072 [2024-07-15 20:52:43.766997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.767025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.777638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f0ff8 00:16:22.072 [2024-07-15 20:52:43.779184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.779207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.789824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f0788 00:16:22.072 [2024-07-15 20:52:43.791355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.791381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.801979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eff18 00:16:22.072 [2024-07-15 20:52:43.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.803512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.814140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ef6a8 00:16:22.072 [2024-07-15 20:52:43.815626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.815652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.826325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eee38 00:16:22.072 [2024-07-15 20:52:43.827792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.827819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.838482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ee5c8 00:16:22.072 [2024-07-15 20:52:43.839936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
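Editor's note: the randwrite phase above was brought up the same way as the randread one: bdevperf listens on /var/tmp/bperf.sock, the traced bdev_nvme options (--nvme-error-stat --bdev-retry-count -1) are applied, the controller is attached with --ddgst over TCP, CRC32C corruption is armed via accel_error_inject_error, and bdevperf.py perform_tests drives I/O for the 2-second window; digest.sh then pulls driver_specific.nvme_error.status_code.command_transient_transport_error out of bdev_get_iostat, as it did after the randread run. A rough Python sketch of that RPC sequence, assuming rpc_cmd talks to the nvmf target on rpc.py's default socket (not shown in this excerpt) and omitting error handling:

    import json
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    BPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
    BPERF_SOCK = "/var/tmp/bperf.sock"   # bdevperf application (bperf_rpc)
    TARGET_SOCK = "/var/tmp/spdk.sock"   # assumed: nvmf target app (rpc_cmd default)

    def rpc(sock: str, *args: str) -> str:
        out = subprocess.run([RPC, "-s", sock, *args],
                             check=True, capture_output=True, text=True)
        return out.stdout

    # host side: apply the bdev_nvme options from the trace
    rpc(BPERF_SOCK, "bdev_nvme_set_options", "--nvme-error-stat",
        "--bdev-retry-count", "-1")
    # target side: injection is disabled while the controller attaches
    rpc(TARGET_SOCK, "accel_error_inject_error", "-o", "crc32c", "-t", "disable")
    rpc(BPERF_SOCK, "bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # arm CRC32C corruption (flags copied verbatim from the trace above)
    rpc(TARGET_SOCK, "accel_error_inject_error", "-o", "crc32c", "-t", "corrupt",
        "-i", "256")

    # drive the configured workload (bdevperf was started earlier in this log
    # with -w randwrite -o 4096 -q 128 -t 2 -z)
    subprocess.run([BPERF_PY, "-s", BPERF_SOCK, "perform_tests"], check=True)

    # the counter digest.sh extracts with jq after each run
    iostat = json.loads(rpc(BPERF_SOCK, "bdev_get_iostat", "-b", "nvme0n1"))
    errors = iostat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]
    assert errors > 0, "expected transient transport errors after digest corruption"

Disabling injection before the attach and re-arming it only afterwards mirrors the order in the trace, presumably so the connect sequence itself is not disturbed by corrupted digests.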
00:16:22.072 [2024-07-15 20:52:43.839962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.850652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190edd58 00:16:22.072 [2024-07-15 20:52:43.852093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.852118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.862811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ed4e8 00:16:22.072 [2024-07-15 20:52:43.864241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.864267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.874966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ecc78 00:16:22.072 [2024-07-15 20:52:43.876383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.876410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.887120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ec408 00:16:22.072 [2024-07-15 20:52:43.888521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.888548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.899289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ebb98 00:16:22.072 [2024-07-15 20:52:43.900672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.900698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.911451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eb328 00:16:22.072 [2024-07-15 20:52:43.912816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.912841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.923623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eaab8 00:16:22.072 [2024-07-15 20:52:43.924971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1347 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.924998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.935824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ea248 00:16:22.072 [2024-07-15 20:52:43.937158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.937198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.948004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e99d8 00:16:22.072 [2024-07-15 20:52:43.949332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.949358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.960192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e9168 00:16:22.072 [2024-07-15 20:52:43.961496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.961521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:22.072 [2024-07-15 20:52:43.972383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e88f8 00:16:22.072 [2024-07-15 20:52:43.973682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.072 [2024-07-15 20:52:43.973707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:22.332 [2024-07-15 20:52:43.984552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e8088 00:16:22.332 [2024-07-15 20:52:43.985833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.332 [2024-07-15 20:52:43.985859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:22.332 [2024-07-15 20:52:43.996738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e7818 00:16:22.332 [2024-07-15 20:52:43.997993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.332 [2024-07-15 20:52:43.998019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:22.332 [2024-07-15 20:52:44.008893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e6fa8 00:16:22.332 [2024-07-15 20:52:44.010141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5972 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.332 [2024-07-15 20:52:44.010178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:22.332 [2024-07-15 20:52:44.021027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e6738 00:16:22.332 [2024-07-15 20:52:44.022274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.332 [2024-07-15 20:52:44.022299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:22.332 [2024-07-15 20:52:44.033235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e5ec8 00:16:22.332 [2024-07-15 20:52:44.034457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.332 [2024-07-15 20:52:44.034483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:22.332 [2024-07-15 20:52:44.045417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e5658 00:16:22.332 [2024-07-15 20:52:44.046620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.332 [2024-07-15 20:52:44.046646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.057581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e4de8 00:16:22.333 [2024-07-15 20:52:44.058769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.058795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.069875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e4578 00:16:22.333 [2024-07-15 20:52:44.071060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.071087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.081984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e3d08 00:16:22.333 [2024-07-15 20:52:44.083153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.083184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.094191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e3498 00:16:22.333 [2024-07-15 20:52:44.095326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:15490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.095353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.106323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e2c28 00:16:22.333 [2024-07-15 20:52:44.107443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.107470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.118483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e23b8 00:16:22.333 [2024-07-15 20:52:44.119588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.119614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.130656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e1b48 00:16:22.333 [2024-07-15 20:52:44.131747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.131772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.142814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e12d8 00:16:22.333 [2024-07-15 20:52:44.143891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.143917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.154977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e0a68 00:16:22.333 [2024-07-15 20:52:44.156034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.156059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.167131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e01f8 00:16:22.333 [2024-07-15 20:52:44.168186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.168203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.179273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190df988 00:16:22.333 [2024-07-15 20:52:44.180309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.180334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.191432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190df118 00:16:22.333 [2024-07-15 20:52:44.192449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.192474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.203575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190de8a8 00:16:22.333 [2024-07-15 20:52:44.204578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.204602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.215744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190de038 00:16:22.333 [2024-07-15 20:52:44.216731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.216757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:22.333 [2024-07-15 20:52:44.232923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190de038 00:16:22.333 [2024-07-15 20:52:44.234864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.333 [2024-07-15 20:52:44.234890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.245067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190de8a8 00:16:22.593 [2024-07-15 20:52:44.246988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.247014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.257250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190df118 00:16:22.593 [2024-07-15 20:52:44.259174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.259196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.269389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190df988 00:16:22.593 [2024-07-15 
20:52:44.271296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.271321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.281605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e01f8 00:16:22.593 [2024-07-15 20:52:44.283482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.283507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.293777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e0a68 00:16:22.593 [2024-07-15 20:52:44.295637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.295662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.305914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e12d8 00:16:22.593 [2024-07-15 20:52:44.307778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.307803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.318079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e1b48 00:16:22.593 [2024-07-15 20:52:44.319900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.319926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.330222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e23b8 00:16:22.593 [2024-07-15 20:52:44.332021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.332047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.342396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e2c28 00:16:22.593 [2024-07-15 20:52:44.344188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.344214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.354551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e3498 
00:16:22.593 [2024-07-15 20:52:44.356327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.356353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.366696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e3d08 00:16:22.593 [2024-07-15 20:52:44.368459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.593 [2024-07-15 20:52:44.368484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:22.593 [2024-07-15 20:52:44.378855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e4578 00:16:22.594 [2024-07-15 20:52:44.380608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.380633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.390994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e4de8 00:16:22.594 [2024-07-15 20:52:44.392726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.392751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.403126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e5658 00:16:22.594 [2024-07-15 20:52:44.404850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.404875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.415305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e5ec8 00:16:22.594 [2024-07-15 20:52:44.417001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.417027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.427455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e6738 00:16:22.594 [2024-07-15 20:52:44.429138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.429177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.439616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) 
with pdu=0x2000190e6fa8 00:16:22.594 [2024-07-15 20:52:44.441300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.441324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.451824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e7818 00:16:22.594 [2024-07-15 20:52:44.453488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.453514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.463988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e8088 00:16:22.594 [2024-07-15 20:52:44.465637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.465661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.476193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e88f8 00:16:22.594 [2024-07-15 20:52:44.477817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.477842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.488353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e9168 00:16:22.594 [2024-07-15 20:52:44.489961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.594 [2024-07-15 20:52:44.489986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:22.594 [2024-07-15 20:52:44.500645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190e99d8 00:16:22.853 [2024-07-15 20:52:44.502254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.853 [2024-07-15 20:52:44.502279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:22.853 [2024-07-15 20:52:44.512819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ea248 00:16:22.853 [2024-07-15 20:52:44.514412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.853 [2024-07-15 20:52:44.514438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:22.853 [2024-07-15 20:52:44.524985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e2f3a0) with pdu=0x2000190eaab8 00:16:22.853 [2024-07-15 20:52:44.526560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.853 [2024-07-15 20:52:44.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:22.853 [2024-07-15 20:52:44.537181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eb328 00:16:22.853 [2024-07-15 20:52:44.538733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.853 [2024-07-15 20:52:44.538760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.549405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ebb98 00:16:22.854 [2024-07-15 20:52:44.550950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.550976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.561587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ec408 00:16:22.854 [2024-07-15 20:52:44.563113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.563137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.573753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ecc78 00:16:22.854 [2024-07-15 20:52:44.575268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.575292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.585933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ed4e8 00:16:22.854 [2024-07-15 20:52:44.587437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.587464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.598111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190edd58 00:16:22.854 [2024-07-15 20:52:44.599592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.599618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.610269] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ee5c8 00:16:22.854 [2024-07-15 20:52:44.611729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.611754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.622400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eee38 00:16:22.854 [2024-07-15 20:52:44.623839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.623864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.634581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ef6a8 00:16:22.854 [2024-07-15 20:52:44.636007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.636033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.646768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eff18 00:16:22.854 [2024-07-15 20:52:44.648193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.648218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.658935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f0788 00:16:22.854 [2024-07-15 20:52:44.660343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.660370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.671112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f0ff8 00:16:22.854 [2024-07-15 20:52:44.672502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.672526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.683273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f1868 00:16:22.854 [2024-07-15 20:52:44.684646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.684671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.695430] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f20d8 00:16:22.854 [2024-07-15 20:52:44.696781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.696807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.707572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f2948 00:16:22.854 [2024-07-15 20:52:44.708910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.708935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.719714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f31b8 00:16:22.854 [2024-07-15 20:52:44.721039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.721064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.731895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f3a28 00:16:22.854 [2024-07-15 20:52:44.733216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.733240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.744024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f4298 00:16:22.854 [2024-07-15 20:52:44.745327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.745352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:22.854 [2024-07-15 20:52:44.756215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f4b08 00:16:22.854 [2024-07-15 20:52:44.757495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.854 [2024-07-15 20:52:44.757521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:23.113 [2024-07-15 20:52:44.768387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f5378 00:16:23.113 [2024-07-15 20:52:44.769649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.113 [2024-07-15 20:52:44.769674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:23.113 [2024-07-15 
20:52:44.780548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f5be8 00:16:23.113 [2024-07-15 20:52:44.781792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.113 [2024-07-15 20:52:44.781817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:23.113 [2024-07-15 20:52:44.792730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f6458 00:16:23.113 [2024-07-15 20:52:44.793964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.113 [2024-07-15 20:52:44.793990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:23.113 [2024-07-15 20:52:44.804883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f6cc8 00:16:23.113 [2024-07-15 20:52:44.806111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.806133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.817035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f7538 00:16:23.114 [2024-07-15 20:52:44.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.818277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.829201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f7da8 00:16:23.114 [2024-07-15 20:52:44.830397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.830421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.841338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f8618 00:16:23.114 [2024-07-15 20:52:44.842519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.842545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.853581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f8e88 00:16:23.114 [2024-07-15 20:52:44.854751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.854776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:16:23.114 [2024-07-15 20:52:44.865769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f96f8 00:16:23.114 [2024-07-15 20:52:44.866925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.866950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.877921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f9f68 00:16:23.114 [2024-07-15 20:52:44.879057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.879081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.890084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fa7d8 00:16:23.114 [2024-07-15 20:52:44.891206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.891230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.902215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fb048 00:16:23.114 [2024-07-15 20:52:44.903313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.903339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.914392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fb8b8 00:16:23.114 [2024-07-15 20:52:44.915474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.915499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.926531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fc128 00:16:23.114 [2024-07-15 20:52:44.927597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.927622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.938670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fc998 00:16:23.114 [2024-07-15 20:52:44.939728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.939753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:000c p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.950816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fd208 00:16:23.114 [2024-07-15 20:52:44.951857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.951884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.962977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fda78 00:16:23.114 [2024-07-15 20:52:44.964008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.964033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.975173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fe2e8 00:16:23.114 [2024-07-15 20:52:44.976183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.976204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:44.987323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190feb58 00:16:23.114 [2024-07-15 20:52:44.988321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:44.988343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:45.004565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fef90 00:16:23.114 [2024-07-15 20:52:45.006517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:45.006544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.114 [2024-07-15 20:52:45.016702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190feb58 00:16:23.114 [2024-07-15 20:52:45.018629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.114 [2024-07-15 20:52:45.018654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:23.373 [2024-07-15 20:52:45.028820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fe2e8 00:16:23.373 [2024-07-15 20:52:45.030750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.373 [2024-07-15 20:52:45.030776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:23.373 [2024-07-15 20:52:45.040976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fda78 00:16:23.373 [2024-07-15 20:52:45.042875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.373 [2024-07-15 20:52:45.042902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:23.373 [2024-07-15 20:52:45.053110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fd208 00:16:23.373 [2024-07-15 20:52:45.054995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.373 [2024-07-15 20:52:45.055021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:23.373 [2024-07-15 20:52:45.065269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fc998 00:16:23.373 [2024-07-15 20:52:45.067129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.373 [2024-07-15 20:52:45.067154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:23.373 [2024-07-15 20:52:45.077453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fc128 00:16:23.373 [2024-07-15 20:52:45.079312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.373 [2024-07-15 20:52:45.079336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:23.373 [2024-07-15 20:52:45.089598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fb8b8 00:16:23.373 [2024-07-15 20:52:45.091438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.373 [2024-07-15 20:52:45.091463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.101735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fb048 00:16:23.374 [2024-07-15 20:52:45.103561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.103587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.113855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190fa7d8 00:16:23.374 [2024-07-15 20:52:45.115666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.115690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.125941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f9f68 00:16:23.374 [2024-07-15 20:52:45.127758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.127782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.138052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f96f8 00:16:23.374 [2024-07-15 20:52:45.139819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.139845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.150152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f8e88 00:16:23.374 [2024-07-15 20:52:45.151909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.151933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.162284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f8618 00:16:23.374 [2024-07-15 20:52:45.164016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.164042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.174438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f7da8 00:16:23.374 [2024-07-15 20:52:45.176159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.176197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.186624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f7538 00:16:23.374 [2024-07-15 20:52:45.188339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.188363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.198761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f6cc8 00:16:23.374 [2024-07-15 20:52:45.200459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.200484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.210945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f6458 00:16:23.374 [2024-07-15 20:52:45.212627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.212652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.223088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f5be8 00:16:23.374 [2024-07-15 20:52:45.224759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.224784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.235231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f5378 00:16:23.374 [2024-07-15 20:52:45.236875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.236901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.247399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f4b08 00:16:23.374 [2024-07-15 20:52:45.249030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.249056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.259553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f4298 00:16:23.374 [2024-07-15 20:52:45.261177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.261202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:23.374 [2024-07-15 20:52:45.271711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f3a28 00:16:23.374 [2024-07-15 20:52:45.273331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.374 [2024-07-15 20:52:45.273356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:23.633 [2024-07-15 20:52:45.283898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f31b8 00:16:23.633 [2024-07-15 20:52:45.285492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.633 [2024-07-15 
20:52:45.285516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:23.633 [2024-07-15 20:52:45.296064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f2948 00:16:23.633 [2024-07-15 20:52:45.297643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.633 [2024-07-15 20:52:45.297669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:23.633 [2024-07-15 20:52:45.308237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f20d8 00:16:23.633 [2024-07-15 20:52:45.309790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.633 [2024-07-15 20:52:45.309815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.320373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f1868 00:16:23.634 [2024-07-15 20:52:45.321918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.321944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.332542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f0ff8 00:16:23.634 [2024-07-15 20:52:45.334073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.334099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.344690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190f0788 00:16:23.634 [2024-07-15 20:52:45.346213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.346233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.356831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eff18 00:16:23.634 [2024-07-15 20:52:45.358338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.358362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.369014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ef6a8 00:16:23.634 [2024-07-15 20:52:45.370512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:23.634 [2024-07-15 20:52:45.370538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.381199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190eee38 00:16:23.634 [2024-07-15 20:52:45.382671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.382697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.393349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ee5c8 00:16:23.634 [2024-07-15 20:52:45.394807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.394834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.405501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190edd58 00:16:23.634 [2024-07-15 20:52:45.406943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.406969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:23.634 [2024-07-15 20:52:45.417649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f3a0) with pdu=0x2000190ed4e8 00:16:23.634 [2024-07-15 20:52:45.419076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.634 [2024-07-15 20:52:45.419101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:23.634 00:16:23.634 Latency(us) 00:16:23.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.634 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.634 nvme0n1 : 2.00 20721.03 80.94 0.00 0.00 6172.61 5658.73 23792.99 00:16:23.634 =================================================================================================================== 00:16:23.634 Total : 20721.03 80.94 0.00 0.00 6172.61 5658.73 23792.99 00:16:23.634 0 00:16:23.634 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:23.634 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:23.634 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:23.634 | .driver_specific 00:16:23.634 | .nvme_error 00:16:23.634 | .status_code 00:16:23.634 | .command_transient_transport_error' 00:16:23.634 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:16:23.892 20:52:45 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79847 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79847 ']' 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79847 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79847 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:23.892 killing process with pid 79847 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:23.892 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79847' 00:16:23.892 Received shutdown signal, test time was about 2.000000 seconds 00:16:23.892 00:16:23.893 Latency(us) 00:16:23.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.893 =================================================================================================================== 00:16:23.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:23.893 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79847 00:16:23.893 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79847 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79902 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79902 /var/tmp/bperf.sock 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79902 ']' 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
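[Annotation] The pass check traced just above ("(( 162 > 0 ))") is driven by a per-bdev NVMe error counter. A minimal standalone sketch of that query, assembled from the rpc.py and jq invocations visible in this trace (the spdk_repo path is assumed to match this job's vagrant layout):

  # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded against nvme0n1.
  # Requires the controller to have been created after
  # "bdev_nvme_set_options --nvme-error-stat"; otherwise no NVMe error statistics are collected.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) || exit 1   # digest test passes only if injected corruption produced errors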
00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.151 20:52:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:24.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:24.151 Zero copy mechanism will not be used. 00:16:24.151 [2024-07-15 20:52:45.910803] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:16:24.151 [2024-07-15 20:52:45.910862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79902 ] 00:16:24.151 [2024-07-15 20:52:46.040208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.409 [2024-07-15 20:52:46.122658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.409 [2024-07-15 20:52:46.163111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:24.976 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.976 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:24.976 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:24.976 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:25.235 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:25.235 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.235 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.235 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.235 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:25.235 20:52:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:25.235 nvme0n1 00:16:25.496 20:52:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:25.496 20:52:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.496 20:52:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:25.496 20:52:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.496 20:52:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:25.496 20:52:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:25.496 I/O size of 131072 is greater than zero copy threshold (65536). 
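[Annotation] Condensed from the xtrace above, the setup for this digest-error subtest boils down to the RPC sequence below. Sockets, addresses and paths are the ones shown in this run; the "bperf"/"tgt" helper names are shorthand for this sketch only, and the target-side socket used by the harness's rpc_cmd helper is assumed to be the default one, so treat this as illustrative rather than a substitute for host/digest.sh.

  spdk=/home/vagrant/spdk_repo/spdk                               # path from this job's workspace
  bperf() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; } # host-side (bdevperf) RPC
  tgt()   { "$spdk/scripts/rpc.py" "$@"; }                        # nvmf target RPC (default socket, assumed)

  # 128 KiB random writes, queue depth 16, 2 s runtime, wait for perform_tests (-z)
  "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  while [[ ! -S /var/tmp/bperf.sock ]]; do sleep 0.1; done        # crude stand-in for the harness's waitforlisten

  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt accel_error_inject_error -o crc32c -t disable               # clear any earlier injection
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # attach with data digest enabled
  tgt accel_error_inject_error -o crc32c -t corrupt -i 32         # corrupt crc32c results at an interval of 32 ops
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests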
00:16:25.496 Zero copy mechanism will not be used. 00:16:25.496 Running I/O for 2 seconds... 00:16:25.496 [2024-07-15 20:52:47.251932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.252293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.252320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.255563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.255640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.255662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.259329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.259382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.259403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.262992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.263054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.263076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.266801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.266858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.270592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.270647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.270667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.274352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.274441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.274461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.278104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.278257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.278277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.281451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.281687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.281706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.285005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.285062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.285081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.288774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.288835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.288854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.292581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.292633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.292653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.296462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.296517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.296537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.300296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.300354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 
20:52:47.300374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.304158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.304227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.304246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.307915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.307974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.307993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.311703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.311765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.311784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.315525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.315588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.315607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.318937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.319279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.319299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.322596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.322667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.322686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.326345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.326395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.326414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.330121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.330187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.330207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.333843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.333904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.333923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.496 [2024-07-15 20:52:47.337615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.496 [2024-07-15 20:52:47.337691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.496 [2024-07-15 20:52:47.337710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.341444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.341498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.341517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.345213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.345280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.345300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.348989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.349043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.349063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.352365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.352685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.352704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.355975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.356054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.356076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.359696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.359750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.359770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.363438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.363489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.363509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.367175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.367241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.367261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.370880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.370964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.370983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.374580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.374708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.374728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.378302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.378437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.378456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.382111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.382269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.382289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.385568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.385846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.385864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.389206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.389261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.389280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.392956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.393014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.393034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.396742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.396794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.396815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.497 [2024-07-15 20:52:47.400479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.497 [2024-07-15 20:52:47.400554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.497 [2024-07-15 20:52:47.400574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.404205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.404256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.404275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.407951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.408038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.408057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.411667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.411753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.415458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.415522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.415541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.418823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.419158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.419188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.422480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.422552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.422571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.426236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.426285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.426304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.429973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 
00:16:25.756 [2024-07-15 20:52:47.430036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.430055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.433762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.433818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.433837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.437529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.437580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.437599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.441256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.441342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.441361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.444952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.445081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.445100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.448375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.448621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.448644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.451942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.451993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.452012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.455645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.455696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.455715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.459433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.459485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.459505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.463157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.463255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.463275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.756 [2024-07-15 20:52:47.466900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.756 [2024-07-15 20:52:47.466952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.756 [2024-07-15 20:52:47.466971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.470668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.470717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.470737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.474381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.474458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.474478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.478070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.478122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.478142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.481492] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.481836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.481855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.485203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.485274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.485293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.488926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.488981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.489000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.492653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.492708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.492727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.496439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.496494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.496513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.500217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.500274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.500293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.504012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.504086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.504105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:16:25.757 [2024-07-15 20:52:47.507750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.507884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.507903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.511150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.511390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.511409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.514725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.514777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.514797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.518603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.518674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.518693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.522316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.522368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.522388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.526057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.526129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.526148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.529855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.529911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.529931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.533620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.533674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.533693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.537384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.537449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.537469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.541091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.541238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.541258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.544532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.544781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.544800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.548091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.548149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.548180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.551851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.551914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.551933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.555746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.555799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.555818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.559620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.757 [2024-07-15 20:52:47.559684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.757 [2024-07-15 20:52:47.559703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.757 [2024-07-15 20:52:47.563416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.563474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.563495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.567139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.567203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.567223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.570897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.570948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.570968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.574620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.574692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.574711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.578376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.578431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.578450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.581732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.582071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.582095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.585384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.585473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.589121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.589181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.589201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.592966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.593016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.593035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.596811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.596865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.596884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.600590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.600653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.600672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.604389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.604474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.604494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.608141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.608252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 
20:52:47.608271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.611906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.612010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.612029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.615332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.615596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.615615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.618939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.618991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.619011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.622665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.622716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.622735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.626377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.626430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.626449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.630037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.630096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.630116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.633853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.633920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:25.758 [2024-07-15 20:52:47.633940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.637642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.637699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.637719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.641442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.641496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.641515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.645192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.645350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.645370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.648658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.648935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.648959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.652299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.652348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.652368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.758 [2024-07-15 20:52:47.655990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.758 [2024-07-15 20:52:47.656055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.758 [2024-07-15 20:52:47.656075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.759 [2024-07-15 20:52:47.659787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.759 [2024-07-15 20:52:47.659846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.759 [2024-07-15 20:52:47.659866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.759 [2024-07-15 20:52:47.663514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:25.759 [2024-07-15 20:52:47.663573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.759 [2024-07-15 20:52:47.663593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.050 [2024-07-15 20:52:47.667357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.050 [2024-07-15 20:52:47.667414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.050 [2024-07-15 20:52:47.667435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.050 [2024-07-15 20:52:47.671269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.050 [2024-07-15 20:52:47.671327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.050 [2024-07-15 20:52:47.671346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.050 [2024-07-15 20:52:47.675116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.050 [2024-07-15 20:52:47.675189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.050 [2024-07-15 20:52:47.675208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.050 [2024-07-15 20:52:47.678849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.050 [2024-07-15 20:52:47.678909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.050 [2024-07-15 20:52:47.678928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.050 [2024-07-15 20:52:47.682555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.682722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.682741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.685979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.686270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.686289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.689618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.689672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.689691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.693374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.693441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.693461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.697147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.697214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.697233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.700875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.700971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.700991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.704624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.704705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.704725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.708396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.708515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.708534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.712204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.712286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.712306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.716011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.716188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.716207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.719505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.719767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.719786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.723080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.723144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.723175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.726777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.726837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.726855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.730584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.730646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.730665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.734322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.734379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.734398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.738047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 
20:52:47.738140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.738160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.741895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.741964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.741982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.745825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.745902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.745922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.749658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.749777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.749796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.753435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.753571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.753592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.756866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.757121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.757140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.760481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.760539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.760559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.764212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with 
pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.764269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.764287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.767969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.768055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.768075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.771712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.771796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.771817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.775472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.775530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.775549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.779258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.779316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.779335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.782982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.783067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.783087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.786719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.786782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.786801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.790015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.790345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.790370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.793659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.793735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.793755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.797417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.797479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.797499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.801301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.801359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.801379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.805019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.805077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.805096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.808764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.808821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.808841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.812517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.812640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.812659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.816253] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.816396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.816415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.819982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.820141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.820160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.823463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.823742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.823761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.827077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.827133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.827151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.830858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.830922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.830941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.834661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.834719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.834738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.838421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.838487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.838506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.842207] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.842268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.842287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.845925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.845986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.846005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.849705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.849775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.849794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.853503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.853564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.853584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.857053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.857394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.857421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.860634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.860712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.860731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.864345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.864399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.864418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 
[2024-07-15 20:52:47.868071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.868131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.868150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.871837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.871897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.871916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.875651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.875730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.875751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.879453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.879528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.051 [2024-07-15 20:52:47.879548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.051 [2024-07-15 20:52:47.883202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.051 [2024-07-15 20:52:47.883343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.883364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.887019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.887156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.887195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.890519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.890794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.890813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.894130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.894200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.894219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.897890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.897944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.897963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.901675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.901729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.901749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.905453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.905533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.905553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.909220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.909278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.909296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.912939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.913023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.913042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.916671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.916746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.916766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.920539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.920615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.920635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.924402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.924562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.924581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.928284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.928444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.928463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.931758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.932028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.932048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.935399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.935460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.935479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.939216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.939283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.939302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.942957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.943017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.943036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.946744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.946806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.946826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.950525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.950593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.950612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.954249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.954322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.954341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.052 [2024-07-15 20:52:47.958040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.052 [2024-07-15 20:52:47.958128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.052 [2024-07-15 20:52:47.958147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.961796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.961916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.961934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.965141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.965392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.965418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.968705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.968762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.968781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.972464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.972530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.972549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.976270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.976327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.976346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.980116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.980183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.980205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.984077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.984145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.984174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.987911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.987968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.987987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.991688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.991752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.991773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.995450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.995515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 
20:52:47.995535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:47.999149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:47.999232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:47.999252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.002934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.002994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.003014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.006323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.006655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.006682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.009990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.010072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.010091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.013595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.013642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.013662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.017402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.017482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.017501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.021157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.021229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
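The burst above repeats one pattern: for each received data PDU, tcp.c reports a CRC32 data digest mismatch on tqpair (0x1e2f6e0), and the matching WRITE command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a side note, a small, hypothetical parser such as the sketch below (not part of the SPDK test suite; the regular expressions are lifted directly from the records above) can condense a burst like this into a per-qpair error count, the range of affected LBAs, and the number of (00/22) completions:

#!/usr/bin/env python3
# Hypothetical helper, not part of the SPDK test suite: condense the repeated
# "Data digest error" burst above into a short per-qpair summary.
import re
import sys
from collections import Counter

# Patterns derived from the log records above.
DIGEST_ERR = re.compile(
    r"data_crc32_calc_done: \*ERROR\*: Data digest error on tqpair=\((0x[0-9a-f]+)\)")
WRITE_CMD = re.compile(
    r"WRITE sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
COMPLETION = re.compile(
    r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\) qid:(\d+) cid:(\d+)")

def summarize(stream):
    # In this capture the records wrap across physical lines, with the wraps
    # falling on existing spaces, so rejoining the lines restores each record.
    text = stream.read().replace("\n", "")
    per_qpair = Counter(DIGEST_ERR.findall(text))
    lbas = [int(lba) for _, _, _, lba, _ in WRITE_CMD.findall(text)]
    completions = COMPLETION.findall(text)
    for qpair, count in sorted(per_qpair.items()):
        print(f"tqpair {qpair}: {count} data digest errors")
    if lbas:
        print(f"{len(lbas)} WRITE commands affected, "
              f"lba range {min(lbas)}..{max(lbas)}")
    print(f"{len(completions)} completions with "
          f"TRANSIENT TRANSPORT ERROR (00/22)")

if __name__ == "__main__":
    summarize(sys.stdin)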
00:16:26.310 [2024-07-15 20:52:48.021247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.024936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.025016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.025036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.028730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.028814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.028833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.032525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.032673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.032692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.035906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.036173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.036202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.039493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.039554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.039573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.043250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.043311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.043330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.046984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.047040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.047059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.050852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.050908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.050927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.054659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.054720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.054740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.058453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.058508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.058527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.062219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.062300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.062318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.065953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.066104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.066123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.069342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.069595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.069613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.072901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.072961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.072980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.076591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.076673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.076693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.080392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.080465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.080484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.084196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.310 [2024-07-15 20:52:48.084293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.310 [2024-07-15 20:52:48.084314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.310 [2024-07-15 20:52:48.087926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.087989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.088008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.091678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.091753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.091779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.095440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.095579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.095599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.099229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.099407] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.099426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.102701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.102972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.102991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.106275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.106332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.106351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.110042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.110094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.110113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.113786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.113839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.113858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.117523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.117605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.117624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.121275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.121330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.121349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.125024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.125092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.125111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.128802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.128875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.128894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.132682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.132789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.132808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.136079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.136329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.136349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.139651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.139711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.139730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.143421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.143479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.143497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.147190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.147254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.147273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.150935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 
[2024-07-15 20:52:48.150997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.151015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.154744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.154802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.154821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.158496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.158572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.158591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.162313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.162394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.162413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.166086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.166141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.166160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.169453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.169771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.169806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.173065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.173150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.173181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.176848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.176906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.176925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.180574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.180643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.180662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.184342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.184424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.184443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.188150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.188262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.188281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.191952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.192027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.192047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.195692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.195837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.195858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.199083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.199323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.199342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.202627] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.202688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.202707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.206395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.206453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.206472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.210148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.210214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.210233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.213992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.214069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.214089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.311 [2024-07-15 20:52:48.217837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.311 [2024-07-15 20:52:48.217895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.311 [2024-07-15 20:52:48.217914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.569 [2024-07-15 20:52:48.221624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.569 [2024-07-15 20:52:48.221728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.569 [2024-07-15 20:52:48.221748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.569 [2024-07-15 20:52:48.225404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.569 [2024-07-15 20:52:48.225473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.569 [2024-07-15 20:52:48.225493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.569 [2024-07-15 
20:52:48.229143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.569 [2024-07-15 20:52:48.229235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.569 [2024-07-15 20:52:48.229254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.232956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.233139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.233159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.236851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.236995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.237015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.240757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.240897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.240916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.244281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.244560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.244579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.247894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.247956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.247974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.251684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.251741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.251760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.255407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.255478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.255497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.259140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.259227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.259246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.262896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.262980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.263000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.266678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.266823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.266844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.270415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.270478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.270497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.274103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.274159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.274189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.277398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.277747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.277775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.281020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.281099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.281118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.284783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.284841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.284861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.288522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.288584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.288603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.292414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.292506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.292525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.296210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.296271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.296290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.299958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.300059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.300079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.303704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.303811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.303830] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.307458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.307599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.307618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.310902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.311155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.311194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.314496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.314555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.314573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.318233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.318292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.318312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.321945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.321999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.322018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.325681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.325744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.325764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.329494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.329554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.329574] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.333272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.333333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.333353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.337023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.337087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.337107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.340814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.340868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.340888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.344163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.344511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.347806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.347899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.347918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.351541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.351619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.351638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.355278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.355337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:26.570 [2024-07-15 20:52:48.355356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.359109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.359177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.359196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.362965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.363029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.363050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.366767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.366825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.366844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.370527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.370597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.370616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.374243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.374305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.374324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.377660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.378005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.570 [2024-07-15 20:52:48.378035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.570 [2024-07-15 20:52:48.381293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.570 [2024-07-15 20:52:48.381367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.381387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.384973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.385033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.385051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.388702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.388763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.388782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.392487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.392568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.392587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.396280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.396377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.400062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.400157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.403812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.403954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.403974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.407210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.407459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.407478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.410802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.410864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.410882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.414477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.414541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.414561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.418198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.418251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.418271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.421939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.421992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.422012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.425690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.425742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.429473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.429545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.429565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.433209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.433283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.433302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.436915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.437082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.437101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.440878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.441011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.441030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.444346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.444609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.444628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.447930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.447990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.448009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.451657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.451724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.451743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.455435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.455493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.455513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.459179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 
[2024-07-15 20:52:48.459260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.459279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.462933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.462994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.463016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.466726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.466795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.466816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.470517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.470624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.470644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.474245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.474369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.474388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.571 [2024-07-15 20:52:48.477584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.571 [2024-07-15 20:52:48.477810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.571 [2024-07-15 20:52:48.477828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.481150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.481218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.481237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.484907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) 
with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.484969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.484989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.488695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.488753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.488772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.492464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.492551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.492570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.496189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.496247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.496267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.499990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.500050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.500069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.503763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.503836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.503855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.507574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.507768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.507788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.511498] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.511629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.511648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.514918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.515212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.515231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.518563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.518623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.518643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.522337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.522395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.522414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.526097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.526150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.526181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.529825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.529905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.529924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.533552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.533622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.533641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.537333] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.537398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.537418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.541079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.541152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.541183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.544855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.545027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.545047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.548380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.548654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.548673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.552014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.552074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.552093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.555751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.831 [2024-07-15 20:52:48.555811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.831 [2024-07-15 20:52:48.555831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.831 [2024-07-15 20:52:48.559485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.559541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.559560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.832 
[2024-07-15 20:52:48.563234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.563293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.563313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.566991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.567050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.567070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.570754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.570820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.570839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.574528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.574612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.574632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.578290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.578356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.578375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.581673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.582005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.582039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.585298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.585380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.585399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.589092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.589156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.589188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.593092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.593149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.593182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.596863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.596939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.596958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.600636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.600720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.600740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.604449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.604527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.604547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.608257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.608317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.608336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.611967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.612035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.612055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.615731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.615789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.615808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.619111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.619449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.619476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.622801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.622879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.622898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.626549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.626602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.626622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.630265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.630319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.630338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.634115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.634202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.634222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.637860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.637911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.637930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.641628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.641694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.641714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.645335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.832 [2024-07-15 20:52:48.645405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.832 [2024-07-15 20:52:48.645425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.832 [2024-07-15 20:52:48.649119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.649198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.649218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.652505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.652842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.652870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.656187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.656263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.656282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.659920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.659980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.659999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.663660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.663736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.663755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.667428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.667483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.667503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.671188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.671257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.671276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.675002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.675083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.675102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.678841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.678927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.678945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.682688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.682809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.682828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.686077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.686332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.686353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.689689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.689744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 
[2024-07-15 20:52:48.689763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.693426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.693491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.693510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.697203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.697260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.697279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.701000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.701059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.701078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.704749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.704814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.704833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.708508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.708570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.708590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.712227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.712299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.712318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.715979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.716043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.716062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.719357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.719687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.723008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.723088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.723107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.726787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.726846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.726865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.730578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.730636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.730655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.833 [2024-07-15 20:52:48.734378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:26.833 [2024-07-15 20:52:48.734439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.833 [2024-07-15 20:52:48.734459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.738138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.738220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.738240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.741864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.741940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.741959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.745562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.745686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.745705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.749469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.749608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.749628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.752948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.753233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.753252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.756599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.756658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.756677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.760327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.760389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.760408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.764130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.764197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.764217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.767901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.767958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.767977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.771718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.771777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.771797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.775444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.775502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.095 [2024-07-15 20:52:48.775522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.095 [2024-07-15 20:52:48.779245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.095 [2024-07-15 20:52:48.779328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.779348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.783045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.783102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.783121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.786480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.786818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.786845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.790154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.790247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.790266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.793848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.793901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.793921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.797559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.797613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.797632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.801319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.801380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.801399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.805069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.805221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.805240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.808799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.808867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.808887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.812544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.812682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.812701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.815963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.816238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.816258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.819575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 
00:16:27.096 [2024-07-15 20:52:48.819651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.819670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.823429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.823505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.823525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.827203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.827262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.827281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.830909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.830976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.830995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.834620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.834729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.834748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.838358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.838439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.842152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.842237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.842255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.845879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.845946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.845966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.849257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.849598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.096 [2024-07-15 20:52:48.849617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.096 [2024-07-15 20:52:48.852902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.096 [2024-07-15 20:52:48.852983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.853003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.856620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.856693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.856712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.860395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.860460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.860479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.864147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.864221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.864240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.867947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.868031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.868050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.871680] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.871760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.871780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.875464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.875598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.875617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.878887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.879144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.879162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.882491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.882550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.882570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.886230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.886291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.886311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.889852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.889925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.889944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.893622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.893692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.893712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
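[editor sketch] The repeated "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above come from the data-digest fault-injection pass of this test: the target sees a mismatching NVMe/TCP data digest (per the NVMe/TCP transport spec this is a CRC32C over the PDU payload) in data_crc32_calc_done(), and the WRITE then completes with status type 00h, code 22h, exactly as spdk_nvme_print_completion logs it. The standalone C sketch below is illustrative only and is not SPDK source: it shows how such a digest can be computed with a bitwise CRC32C, and how the "(SCT/SC) p m dnr" fields printed above decode from the completion's status word (layout per the NVMe base spec). The sample payload and helper names are assumptions made for the example.

/*
 * Illustrative sketch only (not SPDK code): compute a CRC32C data digest,
 * as NVMe/TCP uses for DDGST, and decode the "(00/22)" status printed by
 * spdk_nvme_print_completion in the log above.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Decode the 16-bit phase+status word (completion dword 3, bits 31:16). */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1u;          /* phase tag        */
    unsigned sc  = (status >> 1) & 0xFFu;  /* status code      */
    unsigned sct = (status >> 9) & 0x7u;   /* status code type */
    unsigned m   = (status >> 14) & 0x1u;  /* more             */
    unsigned dnr = (status >> 15) & 0x1u;  /* do not retry     */

    printf("status 0x%04x -> (%02x/%02x) p:%u m:%u dnr:%u\n",
           status, sct, sc, p, m, dnr);
}

int main(void)
{
    /* Example payload standing in for a WRITE's data (assumption). */
    const char payload[] = "example NVMe/TCP PDU payload";
    uint32_t ddgst = crc32c((const uint8_t *)payload, strlen(payload));

    printf("data digest (CRC32C): 0x%08x\n", ddgst);

    /* A receiver recomputes the digest and compares it with the DDGST
     * carried in the PDU; a mismatch is what tcp.c reports above as
     * "Data digest error", and the command completes with SCT 0h /
     * SC 22h (Transient Transport Error): */
    decode_status(0x0044);  /* SC 0x22 << 1; SCT, p, m, dnr all zero */

    return 0;
}

Built with a plain cc invocation, the sketch prints the digest of the sample buffer and shows that status word 0x0044 decodes to (00/22) with p:0 m:0 dnr:0, matching the completions logged throughout this section. The raw test output continues below.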
00:16:27.097 [2024-07-15 20:52:48.897408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.897493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.897513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.901147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.901243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.901263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.904958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.905026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.905045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.908818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.908943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.908962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.912261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.912510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.912529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.915833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.915892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.915912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.919588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.919649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.919669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.923380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.923439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.923458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.927180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.097 [2024-07-15 20:52:48.927237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.097 [2024-07-15 20:52:48.927256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.097 [2024-07-15 20:52:48.930933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.930999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.931019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.934642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.934761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.934780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.938418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.938486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.938505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.942177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.942243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.942263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.945537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.945868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.945895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.949210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.949296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.949315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.952967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.953022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.956750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.956815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.956834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.960559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.960620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.960639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.964357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.964457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.964476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.968263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.968325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.968344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.972024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.972169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.972199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.975468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.975743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.975761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.979130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.979208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.979227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.982881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.982948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.982967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.986613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.986673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.986692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.990338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.990394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.990413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.994130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.994202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.994221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.098 [2024-07-15 20:52:48.998007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.098 [2024-07-15 20:52:48.998066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.098 [2024-07-15 20:52:48.998085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.001742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.001843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.001862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.005474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.005563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.005582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.009356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.009412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.009431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.012740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.013095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.013123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.016450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.016527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.016546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.020197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.020251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.020272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.023886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:27.358 [2024-07-15 20:52:49.023970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.027607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.027685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.027705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.031434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.031531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.031551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.035221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.035319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.035339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.038978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.039106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.039125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.042436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.042685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.042704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.046039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.046095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.046115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.049731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.049786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.049806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.053486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.053543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.053562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.057274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.057329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.057348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.061012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.061102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.061121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.064739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.064873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.064892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.068427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.068521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.068540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.072244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.072298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.072317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.075631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.075982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.076010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.079261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.079318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.079337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.083108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.083212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.083231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.086920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.086976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.086996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.090678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.090737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.090757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.094506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.094565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.094585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.098180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.098245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.098264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.101900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.101998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.102017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.105658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.105797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.105816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.109014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.109281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.109301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.112564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.112634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.112653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.116353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.116414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.116450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.120096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.358 [2024-07-15 20:52:49.120155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.358 [2024-07-15 20:52:49.120202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.358 [2024-07-15 20:52:49.123828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.123883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.123920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.127591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 
20:52:49.127689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.127709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.131364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.131421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.131456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.135107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.135214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.135234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.138843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.138923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.138942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.142270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.142617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.142644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.145926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.146003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.146022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.149636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.149696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.149715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.153508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with 
pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.153561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.153581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.157250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.157305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.157325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.160986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.161060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.161079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.164766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.164829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.164849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.168553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.168616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.168635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.172310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.172376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.172396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.175680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.176012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.176039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.179339] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.179400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.179419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.183144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.183247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.183266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.186863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.186922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.186941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.190610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.190697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.190717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.194391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.194470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.194489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.198108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.198225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.198245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.201750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.201837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.201857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.205484] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.205563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.205582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.208828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.209148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.209167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.212350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.212440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.212459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.216071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.216131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.216149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.219801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.219856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.219875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.223598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.223682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.223700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.227415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.227493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.227513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 
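Each record in the block above follows the same three-line pattern: tcp.c reports a data digest (CRC32C) mismatch on a received PDU (data_crc32_calc_done), nvme_qpair prints the WRITE command that was affected, and the completion carries status COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. Only the count of these completions matters to the test; if you wanted to tally them from a saved copy of this console output (the file name bperf_trace.log below is just a placeholder, the test itself writes no such file), a one-liner is enough:

    # count retryable digest-error completions in a saved copy of the trace
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf_trace.log
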
[2024-07-15 20:52:49.231164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.231312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.231332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.234901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.235047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.235066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.238332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.238586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.238605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.359 [2024-07-15 20:52:49.241904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2f6e0) with pdu=0x2000190fef90 00:16:27.359 [2024-07-15 20:52:49.241960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:27.359 [2024-07-15 20:52:49.241994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:27.359 00:16:27.359 Latency(us) 00:16:27.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.359 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:27.359 nvme0n1 : 2.00 8331.67 1041.46 0.00 0.00 1917.00 1230.44 4026.91 00:16:27.359 =================================================================================================================== 00:16:27.359 Total : 8331.67 1041.46 0.00 0.00 1917.00 1230.44 4026.91 00:16:27.359 0 00:16:27.359 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:27.617 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:27.617 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:27.617 | .driver_specific 00:16:27.617 | .nvme_error 00:16:27.617 | .status_code 00:16:27.617 | .command_transient_transport_error' 00:16:27.617 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:27.617 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 537 > 0 )) 00:16:27.617 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79902 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79902 ']' 00:16:27.618 20:52:49 
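The pass/fail decision does not come from parsing the log: host/digest.sh queries bdevperf's per-bdev error counters over its RPC socket and asserts that the transient-transport-error count is non-zero (537 in this run). The check reduces to the following sketch, which reuses the same RPC call and jq filter shown in the trace above (the count will differ from run to run):

    # read the transient transport error counter back from bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "data digest errors were detected and counted"
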
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79902 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79902 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:27.618 killing process with pid 79902 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79902' 00:16:27.618 Received shutdown signal, test time was about 2.000000 seconds 00:16:27.618 00:16:27.618 Latency(us) 00:16:27.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.618 =================================================================================================================== 00:16:27.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79902 00:16:27.618 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79902 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79700 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79700 ']' 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79700 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79700 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:27.876 killing process with pid 79700 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79700' 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79700 00:16:27.876 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79700 00:16:28.135 00:16:28.135 real 0m16.692s 00:16:28.135 user 0m30.529s 00:16:28.135 sys 0m5.111s 00:16:28.135 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.135 20:52:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:28.135 ************************************ 00:16:28.135 END TEST nvmf_digest_error 00:16:28.135 ************************************ 00:16:28.135 20:52:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:16:28.135 20:52:49 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:28.135 20:52:49 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:16:28.135 
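nvmftestfini, whose trace follows, unwinds the fixture in a fixed order: unload the kernel NVMe-over-TCP initiator modules, kill the nvmf_tgt process if it is still running, and drop the network namespace and addresses used by the target. Condensed to the underlying commands (a rough sketch of what the helper functions do, not a verbatim excerpt):

    modprobe -v -r nvme-tcp        # modprobe -r also unloads nvme_fabrics/nvme_keyring once idle
    modprobe -v -r nvme-fabrics
    kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"   # 79700 in this run; already gone when re-checked below
    ip netns delete nvmf_tgt_ns_spdk   # assumption: remove_spdk_ns amounts to deleting the namespace
    ip -4 addr flush nvmf_init_if
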
20:52:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.135 20:52:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:16:28.135 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.135 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:16:28.135 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.135 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.135 rmmod nvme_tcp 00:16:28.135 rmmod nvme_fabrics 00:16:28.135 rmmod nvme_keyring 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79700 ']' 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79700 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 79700 ']' 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 79700 00:16:28.394 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79700) - No such process 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 79700 is not found' 00:16:28.394 Process with pid 79700 is not found 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:28.394 00:16:28.394 real 0m34.608s 00:16:28.394 user 1m2.070s 00:16:28.394 sys 0m10.626s 00:16:28.394 ************************************ 00:16:28.394 END TEST nvmf_digest 00:16:28.394 ************************************ 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.394 20:52:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:28.394 20:52:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:28.394 20:52:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:16:28.394 20:52:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:16:28.394 20:52:50 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:28.394 20:52:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:28.394 20:52:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.394 20:52:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.394 ************************************ 00:16:28.394 START TEST nvmf_host_multipath 00:16:28.394 
************************************ 00:16:28.394 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:28.654 * Looking for test storage... 00:16:28.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:16:28.654 20:52:50 
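The knobs recorded here (a 64 MiB malloc bdev with 512-byte blocks, subsystem NQN nqn.2016-06.io.spdk:cnode1, rpc.py as the control channel and /var/tmp/bdevperf.sock for the bdevperf instance) are what the multipath test will later feed into the target. As a generic illustration only, not an excerpt of multipath.sh, an SPDK TCP subsystem built from these values would be configured roughly like this once nvmf_tgt is listening on its RPC socket (the bdev name Malloc0 is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o          # $NVMF_TRANSPORT_OPTS from this trace
    $rpc bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
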
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.654 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:28.655 Cannot find device "nvmf_tgt_br" 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.655 Cannot find device "nvmf_tgt_br2" 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:16:28.655 Cannot find device "nvmf_tgt_br" 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:28.655 Cannot find device "nvmf_tgt_br2" 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.655 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.914 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
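For orientation, the nvmf_veth_init trace above (continued just below) reduces to a small topology: one network namespace for the target, three veth pairs, and one host-side bridge. This is a hand-condensed reading of the traced commands, not a copy of nvmf/common.sh:

  # Target lives in its own namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target,    10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target,    10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Host-side peers all join one bridge, so 10.0.0.1 can reach both target addresses;
  # the iptables ACCEPT rules and the three pings that follow are the sanity check.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done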
00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:28.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:16:28.915 00:16:28.915 --- 10.0.0.2 ping statistics --- 00:16:28.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.915 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:28.915 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.915 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:16:28.915 00:16:28.915 --- 10.0.0.3 ping statistics --- 00:16:28.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.915 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:16:28.915 00:16:28.915 --- 10.0.0.1 ping statistics --- 00:16:28.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.915 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80155 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80155 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 80155 ']' 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.915 20:52:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:29.174 [2024-07-15 20:52:50.848923] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:16:29.174 [2024-07-15 20:52:50.848996] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.174 [2024-07-15 20:52:50.992819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:29.174 [2024-07-15 20:52:51.082256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.174 [2024-07-15 20:52:51.082302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.174 [2024-07-15 20:52:51.082312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.174 [2024-07-15 20:52:51.082320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.174 [2024-07-15 20:52:51.082327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
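The nvmfappstart step traced just above launches the target inside that namespace and then waits for its RPC socket. The body of waitforlisten is not shown in this trace; a minimal poll in the same spirit (hypothetical, the real helper in autotest_common.sh does more bookkeeping) would be:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!     # 80155 in this run

  # Poll /var/tmp/spdk.sock until the target answers an RPC (max_retries=100 as traced).
  for ((i = 0; i < 100; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 \
          rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done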
00:16:29.174 [2024-07-15 20:52:51.082478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.174 [2024-07-15 20:52:51.082482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.432 [2024-07-15 20:52:51.123934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80155 00:16:29.999 20:52:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:30.258 [2024-07-15 20:52:51.944882] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.258 20:52:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:30.258 Malloc0 00:16:30.516 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:30.516 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.774 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.774 [2024-07-15 20:52:52.669047] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.032 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:31.032 [2024-07-15 20:52:52.856835] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:31.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
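Condensed, the target-side provisioning traced above is one malloc-backed subsystem exposed on two TCP listeners at the same address; the two ports (4420 and 4421) are the two paths the rest of the test flips between:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421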
00:16:31.032 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80205 00:16:31.032 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:31.032 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80205 /var/tmp/bdevperf.sock 00:16:31.032 20:52:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80205 ']' 00:16:31.032 20:52:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.033 20:52:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.033 20:52:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.033 20:52:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.033 20:52:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 20:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:31.967 20:52:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.967 20:52:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:16:31.967 20:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:32.225 20:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:32.484 Nvme0n1 00:16:32.484 20:52:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:32.742 Nvme0n1 00:16:32.742 20:52:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:16:32.743 20:52:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:33.676 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:33.676 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:33.934 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:34.215 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:34.215 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80250 00:16:34.215 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:34.215 20:52:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:40.838 20:53:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:40.838 20:53:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:40.838 Attaching 4 probes... 00:16:40.838 @path[10.0.0.2, 4421]: 23280 00:16:40.838 @path[10.0.0.2, 4421]: 23735 00:16:40.838 @path[10.0.0.2, 4421]: 23716 00:16:40.838 @path[10.0.0.2, 4421]: 23749 00:16:40.838 @path[10.0.0.2, 4421]: 23737 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80250 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80368 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:40.838 20:53:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:47.438 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:47.438 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:47.438 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.439 Attaching 4 probes... 
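On the initiator side (traced above, around host/multipath.sh@43 through @55), the "host" is bdevperf with two bdev_nvme controllers attached to the same subsystem, the second one in -x multipath mode, so both listeners back a single Nvme0n1 bdev. Condensed, with the flags copied verbatim from the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

  rpc_bperf="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc_bperf bdev_nvme_set_options -r -1
  $rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc_bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # I/O for the whole run is then driven over the RPC socket:
  # bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests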
00:16:47.439 @path[10.0.0.2, 4420]: 23514 00:16:47.439 @path[10.0.0.2, 4420]: 23855 00:16:47.439 @path[10.0.0.2, 4420]: 23761 00:16:47.439 @path[10.0.0.2, 4420]: 23849 00:16:47.439 @path[10.0.0.2, 4420]: 23825 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80368 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:47.439 20:53:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:47.439 20:53:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:47.439 20:53:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80475 00:16:47.439 20:53:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:47.439 20:53:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.995 Attaching 4 probes... 
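Each confirm_io_on_port pass above follows the same recipe: run nvmf_path.bt under bpftrace against the target pid for six seconds, then compare the port the target reports for the requested ANA state with the port the probe counters actually saw. The trace only shows the sed/awk/cut stages piecemeal, so the following is a plausible reconstruction rather than a copy of host/multipath.sh (shown for the "optimized" case):

  # 1. What the target claims: the trsvcid of the listener in the wanted ANA state.
  active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
      nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')

  # 2. What the I/O actually did: nvmf_path.bt appears to count requests per
  #    @path[addr, port]; keep the port of the first counter line in trace.txt.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f 1 | sed -n 1p)

  # 3. Both must match the expected port (the two [[ ... ]] checks at @70/@71 in the trace).
  [[ $port == 4421 ]] && [[ $active_port == 4421 ]]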
00:16:53.995 @path[10.0.0.2, 4421]: 17866 00:16:53.995 @path[10.0.0.2, 4421]: 23384 00:16:53.995 @path[10.0.0.2, 4421]: 23374 00:16:53.995 @path[10.0.0.2, 4421]: 23384 00:16:53.995 @path[10.0.0.2, 4421]: 23349 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80475 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80593 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:53.995 20:53:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.581 Attaching 4 probes... 
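Every set_ANA_state step in this trace is the same pair of RPCs with different states for the two listeners (host/multipath.sh@58 and @59); written out as a helper, as a condensed reading of the traced calls:

  # set_ANA_state <state for listener 4420> <state for listener 4421>
  set_ANA_state() {
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # The pass starting above makes both paths unusable on purpose:
  set_ANA_state inaccessible inaccessible
  # ... so confirm_io_on_port '' '' expects empty @path counters, which is exactly
  # what the probe output a little further down shows.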
00:17:00.581 00:17:00.581 00:17:00.581 00:17:00.581 00:17:00.581 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80593 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:00.581 20:53:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:00.581 20:53:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:00.581 20:53:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:00.581 20:53:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80705 00:17:00.581 20:53:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:00.581 20:53:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.161 Attaching 4 probes... 
00:17:07.161 @path[10.0.0.2, 4421]: 22830 00:17:07.161 @path[10.0.0.2, 4421]: 23005 00:17:07.161 @path[10.0.0.2, 4421]: 23049 00:17:07.161 @path[10.0.0.2, 4421]: 22894 00:17:07.161 @path[10.0.0.2, 4421]: 22872 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80705 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:07.161 20:53:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:08.110 20:53:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:08.110 20:53:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80823 00:17:08.110 20:53:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.110 20:53:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:14.675 20:53:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:14.675 20:53:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:14.675 20:53:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:14.675 20:53:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.675 Attaching 4 probes... 
00:17:14.675 @path[10.0.0.2, 4420]: 22724 00:17:14.675 @path[10.0.0.2, 4420]: 23115 00:17:14.675 @path[10.0.0.2, 4420]: 23102 00:17:14.675 @path[10.0.0.2, 4420]: 23090 00:17:14.675 @path[10.0.0.2, 4420]: 23113 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80823 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:14.675 [2024-07-15 20:53:36.183385] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:14.675 20:53:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:21.246 20:53:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:21.246 20:53:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81003 00:17:21.246 20:53:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:21.246 20:53:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80155 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:26.565 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:26.565 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:26.824 Attaching 4 probes... 
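The final leg (traced just above and continuing below) stops announcing port 4421 altogether, verifies that I/O fails over to 4420, then brings 4421 back as optimized and verifies that I/O follows it; condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 1
  # confirm_io_on_port non_optimized 4420   -> the @path[10.0.0.2, 4420] counters above

  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 6
  # confirm_io_on_port optimized 4421       -> the @path[10.0.0.2, 4421] counters below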
00:17:26.824 @path[10.0.0.2, 4421]: 22727 00:17:26.824 @path[10.0.0.2, 4421]: 22990 00:17:26.824 @path[10.0.0.2, 4421]: 22994 00:17:26.824 @path[10.0.0.2, 4421]: 22981 00:17:26.824 @path[10.0.0.2, 4421]: 22960 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81003 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80205 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80205 ']' 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80205 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80205 00:17:26.824 killing process with pid 80205 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80205' 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80205 00:17:26.824 20:53:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80205 00:17:27.093 Connection closed with partial response: 00:17:27.093 00:17:27.093 00:17:27.093 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80205 00:17:27.093 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:27.093 [2024-07-15 20:52:52.923774] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:17:27.093 [2024-07-15 20:52:52.923856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80205 ] 00:17:27.093 [2024-07-15 20:52:53.064850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.093 [2024-07-15 20:52:53.141497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.093 [2024-07-15 20:52:53.182280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:27.093 Running I/O for 90 seconds... 
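Everything from here to the end of the section is the bdevperf log (try.txt) that the test dumps after shutting bdevperf down. The long runs of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions below are the target answering I/O sent to a listener whose ANA state the test had just made inaccessible; with the second controller attached in multipath mode, bdev_nvme is expected to retry that I/O on the other path, so these completions are part of the exercise rather than a failure. To skim such a dump, a quick tally of the status codes works (hypothetical one-liner, not part of the test):

  # Count completions per (status code type / status code) pair in the dumped log.
  grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' try.txt | sort | uniq -c
  # e.g. "(03/02)" is the Path Related / Asymmetric Access Inaccessible status printed above.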
00:17:27.093 [2024-07-15 20:53:02.496012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.496326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.496816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.093 [2024-07-15 20:53:02.496828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.497057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.497072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.497090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.093 [2024-07-15 20:53:02.497102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:27.093 [2024-07-15 20:53:02.497120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.094 [2024-07-15 20:53:02.497233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.094 [2024-07-15 20:53:02.497790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.497978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.497996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:17:27.094 [2024-07-15 20:53:02.498161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:27.094 [2024-07-15 20:53:02.498413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.094 [2024-07-15 20:53:02.498426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.498703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.498978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.498996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.499008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.499039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.095 [2024-07-15 20:53:02.499085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.499115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.499145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.499186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.499216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.499422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.499434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.095 [2024-07-15 20:53:02.500574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:27.095 [2024-07-15 20:53:02.500932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.095 [2024-07-15 20:53:02.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.500968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.500980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.500999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:27.096 
[2024-07-15 20:53:02.501255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:02.501319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:02.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.926985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.926997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927322] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.096 [2024-07-15 20:53:08.927530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.096 [2024-07-15 20:53:08.927628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.096 [2024-07-15 20:53:08.927658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.096 [2024-07-15 
20:53:08.927688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.096 [2024-07-15 20:53:08.927718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.096 [2024-07-15 20:53:08.927748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:27.096 [2024-07-15 20:53:08.927766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.096 [2024-07-15 20:53:08.927778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.927808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.927838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.927872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.927904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.927933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.927964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.927986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33864 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.927999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.097 [2024-07-15 20:53:08.928584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928628] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.097 [2024-07-15 20:53:08.928857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:27.097 [2024-07-15 20:53:08.928875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.928888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.928905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.928917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:17:27.098 [2024-07-15 20:53:08.928935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.928947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.928964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.928976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.928994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.929591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.098 [2024-07-15 20:53:08.929831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.929973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.929985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.930002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.930014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.930041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.930054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.930072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.098 [2024-07-15 20:53:08.930085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.930103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.098 [2024-07-15 20:53:08.930115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:27.098 [2024-07-15 20:53:08.930133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:08.930927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.930967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.930992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:17:27.099 [2024-07-15 20:53:08.931467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:08.931540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:08.931552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.099 [2024-07-15 20:53:15.720779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:15.720809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:15.720841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.099 [2024-07-15 20:53:15.720877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:27.099 [2024-07-15 20:53:15.720894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.720907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.720924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.720936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.720954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.720966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.720984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.720996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:27.100 [2024-07-15 20:53:15.721115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.100 [2024-07-15 20:53:15.721777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.721974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.721986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.722007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.722020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:17:27.100 [2024-07-15 20:53:15.722045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.722058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.722076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.722088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.722105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.722118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.722136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.722148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.722173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.100 [2024-07-15 20:53:15.722187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:27.100 [2024-07-15 20:53:15.722204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.722522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:27.101 [2024-07-15 20:53:15.722944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.722974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.722992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.101 [2024-07-15 20:53:15.723004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.101 [2024-07-15 20:53:15.723259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:27.101 [2024-07-15 20:53:15.723277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.723497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 
dnr:0 00:17:27.102 [2024-07-15 20:53:15.723849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.723942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.723955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.102 [2024-07-15 20:53:15.724485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:15.724779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:15.724792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:28.772977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:28.773060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:28.773105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.102 [2024-07-15 20:53:28.773119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.102 [2024-07-15 20:53:28.773138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:27.103 [2024-07-15 20:53:28.773588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.103 [2024-07-15 20:53:28.773798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.773975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.773987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 
[2024-07-15 20:53:28.774182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.103 [2024-07-15 20:53:28.774227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.103 [2024-07-15 20:53:28.774239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.104 [2024-07-15 20:53:28.774677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:27.104 [2024-07-15 20:53:28.774973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.104 [2024-07-15 20:53:28.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.774998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 
[2024-07-15 20:53:28.775238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.105 [2024-07-15 20:53:28.775716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.105 [2024-07-15 20:53:28.775729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.105 [2024-07-15 20:53:28.775741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.775975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.775988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.776000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.776026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.776051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.776076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.106 [2024-07-15 20:53:28.776101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a30d0 is same with the state(5) to be set 00:17:27.106 [2024-07-15 20:53:28.776128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83304 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83312 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:83320 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83328 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83336 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83344 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.106 [2024-07-15 20:53:28.776450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83352 len:8 PRP1 0x0 PRP2 0x0 00:17:27.106 [2024-07-15 20:53:28.776461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.106 [2024-07-15 20:53:28.776473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.106 [2024-07-15 20:53:28.776482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83368 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 
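The "aborting queued i/o" / "Command completed manually" pairs above are the host flushing everything still queued on the qpair after the active path went away: each request is completed back to the bdev layer as ABORTED - SQ DELETION (00/08) rather than being dropped. In this suite a failover like this is normally provoked from the test script by moving the subsystem's listener between the two ports; a hedged sketch of that kind of toggle (subsystem name, address and ports are taken from this log, but these are not the exact commands traced in this excerpt):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # remove the listener the host is currently doing I/O against ...
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... and make sure the alternate port is available so multipath can reconnect there (port 4421 below)
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421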
[2024-07-15 20:53:28.776547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83376 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83392 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83400 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83408 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83416 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.107 [2024-07-15 20:53:28.776821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.107 [2024-07-15 20:53:28.776833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83424 len:8 PRP1 0x0 PRP2 0x0 00:17:27.107 [2024-07-15 20:53:28.776845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.776892] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a30d0 was disconnected and freed. reset controller. 00:17:27.107 [2024-07-15 20:53:28.776971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.107 [2024-07-15 20:53:28.776988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.777001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.107 [2024-07-15 20:53:28.777014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.777027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.107 [2024-07-15 20:53:28.777039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.777051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.107 [2024-07-15 20:53:28.777063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.777076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.107 [2024-07-15 20:53:28.777088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.107 [2024-07-15 20:53:28.777106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202e100 is same with the state(5) to be set 00:17:27.107 [2024-07-15 20:53:28.794255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:27.107 [2024-07-15 20:53:28.794304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202e100 (9): Bad file descriptor 00:17:27.107 [2024-07-15 20:53:28.794714] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:27.108 [2024-07-15 20:53:28.794745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202e100 with addr=10.0.0.2, port=4421 00:17:27.108 [2024-07-15 20:53:28.794763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202e100 is same with the state(5) to be 
set 00:17:27.108 [2024-07-15 20:53:28.794826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202e100 (9): Bad file descriptor 00:17:27.108 [2024-07-15 20:53:28.794857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:27.108 [2024-07-15 20:53:28.794874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:27.108 [2024-07-15 20:53:28.794891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:27.108 [2024-07-15 20:53:28.794923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.108 [2024-07-15 20:53:28.794938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:27.108 [2024-07-15 20:53:38.830525] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:27.108 Received shutdown signal, test time was about 54.186083 seconds 00:17:27.108 00:17:27.108 Latency(us) 00:17:27.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.108 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:27.108 Verification LBA range: start 0x0 length 0x4000 00:17:27.108 Nvme0n1 : 54.19 9859.94 38.52 0.00 0.00 12966.89 789.59 7061253.96 00:17:27.108 =================================================================================================================== 00:17:27.108 Total : 9859.94 38.52 0.00 0.00 12966.89 789.59 7061253.96 00:17:27.108 20:53:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:27.367 rmmod nvme_tcp 00:17:27.367 rmmod nvme_fabrics 00:17:27.367 rmmod nvme_keyring 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80155 ']' 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80155 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80155 ']' 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80155 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@953 -- # uname 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80155 00:17:27.367 killing process with pid 80155 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80155' 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80155 00:17:27.367 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80155 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:27.625 00:17:27.625 real 0m59.259s 00:17:27.625 user 2m39.234s 00:17:27.625 sys 0m22.184s 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.625 20:53:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:27.625 ************************************ 00:17:27.625 END TEST nvmf_host_multipath 00:17:27.625 ************************************ 00:17:27.625 20:53:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:27.625 20:53:49 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:27.625 20:53:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.625 20:53:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.625 20:53:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.625 ************************************ 00:17:27.625 START TEST nvmf_timeout 00:17:27.625 ************************************ 00:17:27.625 20:53:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:27.884 * Looking for test storage... 
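Each suite in this job is dispatched through the run_test helper shown above (run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp). Roughly, and only as a sketch rather than the actual autotest_common.sh implementation, the wrapper prints the START/END TEST banners, times the script, and propagates its exit status:

  run_test() {
      local test_name=$1
      shift
      echo "START TEST $test_name"   # rendered as the banner blocks in this log
      time "$@"                      # runs e.g. timeout.sh --transport=tcp; produces the real/user/sys lines above
      local rc=$?
      echo "END TEST $test_name"
      return $rc
  }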
00:17:27.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:17:27.884 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.885 
20:53:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.885 20:53:49 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:27.885 Cannot find device "nvmf_tgt_br" 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.885 Cannot find device "nvmf_tgt_br2" 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:27.885 Cannot find device "nvmf_tgt_br" 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:27.885 Cannot find device "nvmf_tgt_br2" 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:17:27.885 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.146 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.146 20:53:49 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.146 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.146 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.146 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.146 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:28.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:28.146 00:17:28.146 --- 10.0.0.2 ping statistics --- 00:17:28.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.146 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:28.146 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:28.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:28.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:17:28.146 00:17:28.146 --- 10.0.0.3 ping statistics --- 00:17:28.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.146 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:28.146 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:28.405 00:17:28.405 --- 10.0.0.1 ping statistics --- 00:17:28.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.405 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81313 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81313 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81313 ']' 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.405 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:28.405 [2024-07-15 20:53:50.154350] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
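The nvmf_veth_init trace above reduces to a small veth-plus-bridge topology. As a minimal standalone sketch (interface names and 10.0.0.x addresses taken from the trace; the second target interface nvmf_tgt_if2/nvmf_tgt_br2 and the FORWARD rule are left out for brevity; run as root):

# Namespace for the target, veth pairs for the initiator and target sides.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: 10.0.0.1 on the host (initiator), 10.0.0.2 inside the namespace (target).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# A bridge ties the host-side veth peers together; allow NVMe/TCP (port 4420) in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # same reachability check the script performs above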
00:17:28.405 [2024-07-15 20:53:50.154431] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.405 [2024-07-15 20:53:50.294896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:28.665 [2024-07-15 20:53:50.372793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.665 [2024-07-15 20:53:50.372848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.665 [2024-07-15 20:53:50.372858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.665 [2024-07-15 20:53:50.372865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.665 [2024-07-15 20:53:50.372872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.665 [2024-07-15 20:53:50.373030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.665 [2024-07-15 20:53:50.373033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.665 [2024-07-15 20:53:50.414277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:29.232 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.232 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:17:29.232 20:53:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.232 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.232 20:53:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:29.232 20:53:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.232 20:53:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.232 20:53:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:29.492 [2024-07-15 20:53:51.228796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.492 20:53:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:29.751 Malloc0 00:17:29.751 20:53:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:29.751 20:53:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.011 20:53:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.269 [2024-07-15 20:53:52.021302] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81362 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81362 /var/tmp/bdevperf.sock 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81362 ']' 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.269 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:30.269 [2024-07-15 20:53:52.082459] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:17:30.269 [2024-07-15 20:53:52.082527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81362 ] 00:17:30.528 [2024-07-15 20:53:52.222218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.528 [2024-07-15 20:53:52.303822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.528 [2024-07-15 20:53:52.344475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:31.094 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.094 20:53:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:17:31.094 20:53:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:31.353 20:53:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:31.610 NVMe0n1 00:17:31.610 20:53:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81380 00:17:31.610 20:53:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.610 20:53:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:17:31.610 Running I/O for 10 seconds... 
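Stripped of the xtrace noise, the configuration driven through rpc.py above comes down to the sketch below; rpc.py, bdevperf and bdevperf.py stand in for the full /home/vagrant/spdk_repo/spdk/... paths seen in the trace, and everything else is copied from it.

# Target side (nvmf_tgt already running inside nvmf_tgt_ns_spdk on the default /var/tmp/spdk.sock):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf on its own RPC socket, with the reconnect knobs under test.
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The last two controller options are what host/timeout.sh exercises: after the connection is lost, bdev_nvme retries the controller roughly every 2 seconds and gives up on it once about 5 seconds of failed reconnects have elapsed.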
00:17:32.540 20:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.837 [2024-07-15 20:53:54.617450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.837 [2024-07-15 20:53:54.617508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.837 [2024-07-15 20:53:54.617529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.837 [2024-07-15 20:53:54.617547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.837 [2024-07-15 20:53:54.617565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x706d40 is same with the state(5) to be set 00:17:32.837 [2024-07-15 20:53:54.617626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.837 [2024-07-15 20:53:54.617771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.837 [2024-07-15 20:53:54.617789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.837 [2024-07-15 20:53:54.617807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.837 [2024-07-15 20:53:54.617826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.837 [2024-07-15 20:53:54.617844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.837 [2024-07-15 20:53:54.617864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.837 [2024-07-15 20:53:54.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.837 [2024-07-15 20:53:54.617882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.617892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.617900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.617910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.617919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.617928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.617937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.617946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.617954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.617964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.617972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.617982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618120] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108712 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 
[2024-07-15 20:53:54.618738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.618813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.618985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.618995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.838 [2024-07-15 20:53:54.619124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.838 [2024-07-15 20:53:54.619320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.838 [2024-07-15 20:53:54.619330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:32.839 [2024-07-15 20:53:54.619716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:32.839 [2024-07-15 20:53:54.619815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 
20:53:54.619899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.619988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.619996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.620014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.620032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.620051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.839 [2024-07-15 20:53:54.620090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:32.839 [2024-07-15 20:53:54.620129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:32.839 [2024-07-15 20:53:54.620136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108472 len:8 PRP1 0x0 PRP2 0x0 00:17:32.839 [2024-07-15 20:53:54.620144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.839 [2024-07-15 20:53:54.620197] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x757730 was disconnected and freed. reset controller. 00:17:32.839 [2024-07-15 20:53:54.620395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:32.839 [2024-07-15 20:53:54.620412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x706d40 (9): Bad file descriptor 00:17:32.839 [2024-07-15 20:53:54.620494] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:32.839 [2024-07-15 20:53:54.620508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x706d40 with addr=10.0.0.2, port=4420 00:17:32.839 [2024-07-15 20:53:54.620518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x706d40 is same with the state(5) to be set 00:17:32.839 [2024-07-15 20:53:54.620531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x706d40 (9): Bad file descriptor 00:17:32.839 [2024-07-15 20:53:54.620543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.839 [2024-07-15 20:53:54.620552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:32.839 [2024-07-15 20:53:54.620561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:32.839 [2024-07-15 20:53:54.620576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
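The long run of nvme_io_qpair_print_command / spdk_nvme_print_completion notices above is the bdev layer draining its queue: every read and write still outstanding on the TCP qpair is completed with NVMe generic status 00/08, "Command Aborted due to SQ Deletion", once the qpair (0x757730) is disconnected and freed. The controller then drops into the reconnect loop that continues below, where each connect() is refused with errno 111 (ECONNREFUSED) because the target listener is gone. A hypothetical way to size the flood from a saved copy of this console output (the file name console.log is only an assumption):

  # Count how many queued I/Os were completed as ABORTED - SQ DELETION.
  grep -c 'ABORTED - SQ DELETION' console.log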
00:17:32.839 [2024-07-15 20:53:54.620584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:32.839 20:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:17:34.737 [2024-07-15 20:53:56.617608] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.737 [2024-07-15 20:53:56.617676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x706d40 with addr=10.0.0.2, port=4420 00:17:34.737 [2024-07-15 20:53:56.617690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x706d40 is same with the state(5) to be set 00:17:34.737 [2024-07-15 20:53:56.617712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x706d40 (9): Bad file descriptor 00:17:34.738 [2024-07-15 20:53:56.617737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:34.738 [2024-07-15 20:53:56.617746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:34.738 [2024-07-15 20:53:56.617758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:34.738 [2024-07-15 20:53:56.617781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:34.738 [2024-07-15 20:53:56.617790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.738 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:17:34.738 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:34.738 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:34.994 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:34.994 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:17:34.994 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:34.994 20:53:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:35.251 20:53:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:35.251 20:53:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:17:37.207 [2024-07-15 20:53:58.614677] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:37.208 [2024-07-15 20:53:58.614730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x706d40 with addr=10.0.0.2, port=4420 00:17:37.208 [2024-07-15 20:53:58.614743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x706d40 is same with the state(5) to be set 00:17:37.208 [2024-07-15 20:53:58.614764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x706d40 (9): Bad file descriptor 00:17:37.208 [2024-07-15 20:53:58.614780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:37.208 [2024-07-15 20:53:58.614789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:37.208 [2024-07-15 20:53:58.614799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
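The assertions traced at host/timeout.sh@57 and @58 show that, while the reconnect loop is still running, the controller and its namespace bdev remain registered: get_controller returns NVMe0 and get_bdev returns NVMe0n1. A plausible reconstruction of those helpers, assuming they are nothing more than the rpc.py + jq pipelines visible in the xtrace (the real definitions live in host/timeout.sh):

  get_controller() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_get_bdevs | jq -r '.[].name'
  }
  [[ "$(get_controller)" == "NVMe0" ]]    # still present during the reconnect window
  [[ "$(get_bdev)" == "NVMe0n1" ]]

After the sleep 5 at @61, the same checks at @62/@63 expect empty strings instead, consistent with the controller being torn down once its loss timeout expires (the second attach below uses --ctrlr-loss-timeout-sec 5; the first attach sits outside this excerpt, so that value is an assumption here).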
00:17:37.208 [2024-07-15 20:53:58.614821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:37.208 [2024-07-15 20:53:58.614831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:39.106 [2024-07-15 20:54:00.611708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:39.106 [2024-07-15 20:54:00.611774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.106 [2024-07-15 20:54:00.611784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:39.106 [2024-07-15 20:54:00.611795] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:39.106 [2024-07-15 20:54:00.611817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:40.041 00:17:40.041 Latency(us) 00:17:40.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.041 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.041 Verification LBA range: start 0x0 length 0x4000 00:17:40.041 NVMe0n1 : 8.14 1658.88 6.48 15.72 0.00 76271.90 2750.41 7061253.96 00:17:40.041 =================================================================================================================== 00:17:40.041 Total : 1658.88 6.48 15.72 0.00 76271.90 2750.41 7061253.96 00:17:40.041 0 00:17:40.298 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:17:40.298 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:40.298 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 81380 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81362 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81362 ']' 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81362 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81362 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81362' 00:17:40.556 killing process with pid 81362 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81362 00:17:40.556 
Received shutdown signal, test time was about 8.976830 seconds 00:17:40.556 00:17:40.556 Latency(us) 00:17:40.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.556 =================================================================================================================== 00:17:40.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.556 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81362 00:17:40.814 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.071 [2024-07-15 20:54:02.799074] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81496 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81496 /var/tmp/bdevperf.sock 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81496 ']' 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.071 20:54:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:41.071 [2024-07-15 20:54:02.866942] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
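The replacement bdevperf at host/timeout.sh@73 is started idle: -z keeps it from running any workload until a perform_tests RPC arrives, and -r points it at a private RPC socket that the test waits on before configuring it. A minimal sketch of that pattern, with the wait loop simplified (autotest_common.sh's waitforlisten does more validation than this):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  until [ -S "$SOCK" ]; do sleep 0.1; done        # crude stand-in for waitforlisten
  # bdev_nvme_set_options / bdev_nvme_attach_controller are then issued with
  # rpc.py -s "$SOCK", and I/O only starts once bdevperf.py ... perform_tests is sent.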
00:17:41.071 [2024-07-15 20:54:02.867015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81496 ] 00:17:41.329 [2024-07-15 20:54:03.006984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.329 [2024-07-15 20:54:03.103066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.329 [2024-07-15 20:54:03.144533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:41.891 20:54:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.891 20:54:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:17:41.891 20:54:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:42.148 20:54:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:42.405 NVMe0n1 00:17:42.405 20:54:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81520 00:17:42.405 20:54:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:17:42.405 20:54:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.405 Running I/O for 10 seconds... 00:17:43.335 20:54:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.594 [2024-07-15 20:54:05.318673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.318731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.318760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.318797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.318817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 
20:54:05.318836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.318983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.318992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.594 [2024-07-15 20:54:05.319130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.319149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.319168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.594 [2024-07-15 20:54:05.319196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.594 [2024-07-15 20:54:05.319207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:43.595 [2024-07-15 20:54:05.319632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 
20:54:05.319833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.595 [2024-07-15 20:54:05.319842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.319987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.319997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.320005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.595 [2024-07-15 20:54:05.320015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.595 [2024-07-15 20:54:05.320023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320204] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.596 [2024-07-15 20:54:05.320431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.596 [2024-07-15 20:54:05.320706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.596 [2024-07-15 20:54:05.320731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.320750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 
[2024-07-15 20:54:05.320800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.597 [2024-07-15 20:54:05.320911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.320929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.320947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.320965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.320986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.320996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.597 [2024-07-15 20:54:05.321187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2422730 is same with the state(5) to be set 00:17:43.597 [2024-07-15 20:54:05.321214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:43.597 [2024-07-15 20:54:05.321221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:43.597 [2024-07-15 20:54:05.321228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104704 len:8 PRP1 0x0 PRP2 0x0 00:17:43.597 [2024-07-15 20:54:05.321237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.597 [2024-07-15 20:54:05.321282] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2422730 was disconnected and freed. reset controller. 00:17:43.597 [2024-07-15 20:54:05.321490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.597 [2024-07-15 20:54:05.321558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor 00:17:43.597 [2024-07-15 20:54:05.321638] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.597 [2024-07-15 20:54:05.321652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d1d40 with addr=10.0.0.2, port=4420 00:17:43.597 [2024-07-15 20:54:05.321662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set 00:17:43.597 [2024-07-15 20:54:05.321675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor 00:17:43.597 [2024-07-15 20:54:05.321688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:43.597 [2024-07-15 20:54:05.321696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:43.597 [2024-07-15 20:54:05.321706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:43.597 [2024-07-15 20:54:05.321721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:43.597 [2024-07-15 20:54:05.321730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:43.597 20:54:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:17:44.526 [2024-07-15 20:54:06.320234] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:44.526 [2024-07-15 20:54:06.320298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d1d40 with addr=10.0.0.2, port=4420
00:17:44.526 [2024-07-15 20:54:06.320311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set
00:17:44.526 [2024-07-15 20:54:06.320331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor
00:17:44.526 [2024-07-15 20:54:06.320348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:44.526 [2024-07-15 20:54:06.320357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:44.526 [2024-07-15 20:54:06.320368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:44.526 [2024-07-15 20:54:06.320388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:44.526 [2024-07-15 20:54:06.320397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:44.526 20:54:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:44.783 [2024-07-15 20:54:06.519816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:44.783 20:54:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 81520
00:17:45.728 [2024-07-15 20:54:07.337527] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:53.830
00:17:53.830                                                  Latency(us)
00:17:53.830 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s    TO/s    Average       min        max
00:17:53.830 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:53.830 Verification LBA range: start 0x0 length 0x4000
00:17:53.830 NVMe0n1                     :      10.01   8527.18    33.31     0.00    0.00   14987.74    993.57 3018551.31
00:17:53.830 ===================================================================================================================
00:17:53.830 Total                       :              8527.18    33.31     0.00    0.00   14987.74    993.57 3018551.31
00:17:53.830 0
00:17:53.830 20:54:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81629
00:17:53.830 20:54:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:53.830 20:54:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:17:53.830 Running I/O for 10 seconds...
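Steps 90 through 98 of host/timeout.sh above are the listener-toggle cycle driven while bdevperf keeps issuing verify I/O: the TCP listener that was removed earlier is re-added over the RPC socket, the in-flight bdevperf run is reaped so it prints the latency table, and perform_tests is sent to start the next measurement window. A minimal shell sketch of that cycle, using only the commands visible in this log; the $bdevperf_pid/$rpc_pid variable names, the backgrounding with &, and the $!-style pid capture are illustrative assumptions, not the script's actual code:

    # re-advertise the TCP listener so the host's reconnect attempts can succeed again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # reap the background bdevperf job (pid 81520 in this run) and let it print its latency summary
    wait "$bdevperf_pid"
    # kick off the next I/O pass against the already-attached bdev over the bdevperf RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1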
00:17:53.830 20:54:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.830 [2024-07-15 20:54:15.399981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 
[2024-07-15 20:54:15.400226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.830 [2024-07-15 20:54:15.400337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.830 [2024-07-15 20:54:15.400460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.830 [2024-07-15 20:54:15.400469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.831 [2024-07-15 20:54:15.400932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 [2024-07-15 20:54:15.400960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.831 
[2024-07-15 20:54:15.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.831 [2024-07-15 20:54:15.400986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.400995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.832 [2024-07-15 20:54:15.401309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.832 [2024-07-15 20:54:15.401490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.832 [2024-07-15 20:54:15.401500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 
[2024-07-15 20:54:15.401725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.833 [2024-07-15 20:54:15.401862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.401979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.833 [2024-07-15 20:54:15.401988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.833 [2024-07-15 20:54:15.402000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244f320 is same with the state(5) to be set 00:17:53.833 [2024-07-15 20:54:15.402011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.833 [2024-07-15 20:54:15.402018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.833 [2024-07-15 20:54:15.402025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:17:53.833 [2024-07-15 20:54:15.402042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:17:53.834 [2024-07-15 20:54:15.402122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402312] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.834 [2024-07-15 20:54:15.402533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:17:53.834 [2024-07-15 20:54:15.402541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.834 [2024-07-15 20:54:15.402550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.834 [2024-07-15 20:54:15.402556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.835 [2024-07-15 20:54:15.402563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-07-15 20:54:15.402571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.402580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.835 [2024-07-15 20:54:15.402586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.835 [2024-07-15 20:54:15.402595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:17:53.835 20:54:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:17:53.835 [2024-07-15 20:54:15.420891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.420929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.835 [2024-07-15 20:54:15.420939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.835 [2024-07-15 20:54:15.420948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-07-15 20:54:15.420957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.420966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.835 [2024-07-15 20:54:15.420973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.835 [2024-07-15 20:54:15.420981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-07-15 20:54:15.420989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.420998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.835 [2024-07-15 20:54:15.421004] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.835 [2024-07-15 20:54:15.421012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-07-15 20:54:15.421020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.421029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.835 [2024-07-15 20:54:15.421036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.835 [2024-07-15 20:54:15.421043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:17:53.835 [2024-07-15 20:54:15.421051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.421103] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x244f320 was disconnected and freed. reset controller. 00:17:53.835 [2024-07-15 20:54:15.421209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.835 [2024-07-15 20:54:15.421221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.421232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.835 [2024-07-15 20:54:15.421241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.421250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.835 [2024-07-15 20:54:15.421258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.421266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.835 [2024-07-15 20:54:15.421274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.835 [2024-07-15 20:54:15.421283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set 00:17:53.835 [2024-07-15 20:54:15.421457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.835 [2024-07-15 20:54:15.421473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor 00:17:53.835 [2024-07-15 20:54:15.421551] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:53.835 [2024-07-15 20:54:15.421564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d1d40 with addr=10.0.0.2, port=4420 00:17:53.835 [2024-07-15 20:54:15.421574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set 00:17:53.835 [2024-07-15 20:54:15.421588] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor 00:17:53.835 [2024-07-15 20:54:15.421601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:53.835 [2024-07-15 20:54:15.421609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:53.835 [2024-07-15 20:54:15.421618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:53.835 [2024-07-15 20:54:15.421633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:53.835 [2024-07-15 20:54:15.421642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:54.794 [2024-07-15 20:54:16.420123] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:54.794 [2024-07-15 20:54:16.420192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d1d40 with addr=10.0.0.2, port=4420 00:17:54.794 [2024-07-15 20:54:16.420206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set 00:17:54.794 [2024-07-15 20:54:16.420227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor 00:17:54.794 [2024-07-15 20:54:16.420241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.794 [2024-07-15 20:54:16.420250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:54.794 [2024-07-15 20:54:16.420261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:54.794 [2024-07-15 20:54:16.420281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:54.794 [2024-07-15 20:54:16.420290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.733 [2024-07-15 20:54:17.418778] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.733 [2024-07-15 20:54:17.418836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d1d40 with addr=10.0.0.2, port=4420 00:17:55.733 [2024-07-15 20:54:17.418848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set 00:17:55.733 [2024-07-15 20:54:17.418867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor 00:17:55.733 [2024-07-15 20:54:17.418880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.733 [2024-07-15 20:54:17.418889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:55.733 [2024-07-15 20:54:17.418899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.733 [2024-07-15 20:54:17.418916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
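[Editor's note] The repeated connect() failures above (errno = 111, which is ECONNREFUSED on Linux) are the bdev_nvme reconnect path retrying roughly once per second while the target's TCP listener is removed; it keeps disconnecting and retrying until the listener comes back. A minimal Python sketch of that retry pattern, purely as an illustration of the errno and cadence (not SPDK code), assuming the 10.0.0.2:4420 address from the log:

import errno
import socket
import time

# Poll a TCP endpoint until something is listening again, the way the
# reconnect attempts above keep failing with ECONNREFUSED (errno 111)
# until nvmf_subsystem_add_listener restores the listener.
def wait_for_listener(addr="10.0.0.2", port=4420, delay=1.0):
    while True:
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return  # listener is back
        except OSError as e:
            if e.errno not in (errno.ECONNREFUSED, errno.ETIMEDOUT, None):
                raise
            time.sleep(delay)  # retry, mirroring the ~1 s cadence in the log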
00:17:55.733 [2024-07-15 20:54:17.418924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:56.670 20:54:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:56.670 [2024-07-15 20:54:18.419898] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:56.670 [2024-07-15 20:54:18.419938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d1d40 with addr=10.0.0.2, port=4420
00:17:56.670 [2024-07-15 20:54:18.419949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d1d40 is same with the state(5) to be set
00:17:56.670 [2024-07-15 20:54:18.420126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d1d40 (9): Bad file descriptor
00:17:56.670 [2024-07-15 20:54:18.420310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:56.670 [2024-07-15 20:54:18.420321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:56.670 [2024-07-15 20:54:18.420330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:56.670 [2024-07-15 20:54:18.423036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:56.670 [2024-07-15 20:54:18.423066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:56.930 [2024-07-15 20:54:18.592560] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:56.930 20:54:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 81629
00:17:57.923 [2024-07-15 20:54:19.451570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
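[Editor's note] The rpc.py nvmf_subsystem_add_listener call traced above is a thin client over SPDK's JSON-RPC Unix socket. A minimal sketch of sending the same request by hand, assuming the target uses the default /var/tmp/spdk.sock RPC socket (the call above passes no -s) and that the JSON parameter names mirror the rpc.py flags:

import json
import socket

def spdk_rpc(sock_path, method, params, req_id=1):
    """Send one JSON-RPC 2.0 request to an SPDK Unix-domain RPC socket
    and return one chunk of the raw response."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return s.recv(65536)

# Re-add the TCP listener that the timeout test removes and restores
# around the I/O run (address and NQN taken from the rpc.py call above).
print(spdk_rpc("/var/tmp/spdk.sock", "nvmf_subsystem_add_listener", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "listen_address": {"trtype": "tcp", "adrfam": "ipv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
}))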
00:18:03.207
00:18:03.207 Latency(us)
00:18:03.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:03.207 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:03.207 Verification LBA range: start 0x0 length 0x4000
00:18:03.207 NVMe0n1 : 10.01 7266.66 28.39 5192.77 0.00 10251.28 473.75 3032026.99
00:18:03.207 ===================================================================================================================
00:18:03.207 Total : 7266.66 28.39 5192.77 0.00 10251.28 0.00 3032026.99
00:18:03.207 0
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81496
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81496 ']'
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81496
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81496
00:18:03.207 killing process with pid 81496
00:18:03.207 Received shutdown signal, test time was about 10.000000 seconds
00:18:03.207
00:18:03.207 Latency(us)
00:18:03.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:03.207 ===================================================================================================================
00:18:03.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81496'
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81496
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81496
00:18:03.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81744
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81744 /var/tmp/bdevperf.sock
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81744 ']'
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:03.207 20:54:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:18:03.207 [2024-07-15 20:54:24.607624] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization...
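[Editor's note] In the Latency(us) table above, the columns after runtime(s) are IOPS, MiB/s, failures and timeouts per second, and average/min/max latency in microseconds. A quick cross-check that the throughput column agrees with the 4096-byte IO size reported for the job:

# Cross-check the NVMe0n1 row above: 7266.66 IOPS at the 4096-byte
# IO size reported for the job should equal the MiB/s column.
iops = 7266.66
io_size_bytes = 4096
mib_per_sec = iops * io_size_bytes / (1024 * 1024)
print(f"{mib_per_sec:.2f} MiB/s")  # prints 28.39 MiB/s, matching the table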
00:18:03.207 [2024-07-15 20:54:24.607694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81744 ]
00:18:03.207 [2024-07-15 20:54:24.742544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:03.207 [2024-07-15 20:54:24.834425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:03.207 [2024-07-15 20:54:24.875482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:03.835 20:54:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:03.835 20:54:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:18:03.835 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81744 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:18:03.835 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81754
00:18:03.835 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:18:03.835 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:18:04.113 NVMe0n1
00:18:04.113 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81796
00:18:04.113 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:04.113 20:54:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:18:04.372 Running I/O for 10 seconds...
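[Editor's note] The --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 flags on the bdev_nvme_attach_controller call above set up the behaviour the rest of this run exercises: once the listener disappears, the initiator retries the connection roughly every 2 seconds and gives up on the controller after it has been unreachable for 5 seconds. A sketch of the JSON-RPC body rpc.py presumably builds for that call, with parameter names assumed from the flag names; it would be sent to /var/tmp/bdevperf.sock in the same way as the earlier listener sketch:

import json

# JSON-RPC body rpc.py presumably builds for the attach call traced above;
# parameter names are assumed from the flag names and may differ slightly.
attach_req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "NVMe0",
        "trtype": "tcp",
        "adrfam": "ipv4",
        "traddr": "10.0.0.2",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "reconnect_delay_sec": 2,      # retry the reconnect every 2 s
        "ctrlr_loss_timeout_sec": 5,   # give up on the controller after 5 s
    },
}
print(json.dumps(attach_req, indent=2))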
00:18:05.311 20:54:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.311 [2024-07-15 20:54:27.144521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144591] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144616] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144623] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144647] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144677] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144716] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144731] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144866] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144882] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the 
state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.144992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145008] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145111] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145150] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145198] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.311 [2024-07-15 20:54:27.145206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145237] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 
20:54:27.145269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145401] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145423] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145431] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same 
with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145448] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145479] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145542] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f1670 is same with the state(5) to be set 00:18:05.312 [2024-07-15 20:54:27.145632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 
20:54:27.145702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.145986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.145996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.146004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.146014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.312 [2024-07-15 20:54:27.146023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.312 [2024-07-15 20:54:27.146044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.313 [2024-07-15 20:54:27.146813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.313 [2024-07-15 20:54:27.146823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 
20:54:27.146831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.146991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.146999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147219] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:05.314 [2024-07-15 20:54:27.147592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.314 [2024-07-15 20:54:27.147617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.314 [2024-07-15 20:54:27.147627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147953] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.147989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.147997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.315 [2024-07-15 20:54:27.148015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dd330 is same with the state(5) to be set 00:18:05.315 [2024-07-15 20:54:27.148036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:05.315 [2024-07-15 20:54:27.148043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:05.315 [2024-07-15 20:54:27.148050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86696 len:8 PRP1 0x0 PRP2 0x0 00:18:05.315 [2024-07-15 20:54:27.148058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148104] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8dd330 was disconnected and freed. reset controller. 
00:18:05.315 [2024-07-15 20:54:27.148212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.315 [2024-07-15 20:54:27.148227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.315 [2024-07-15 20:54:27.148249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.315 [2024-07-15 20:54:27.148267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.315 [2024-07-15 20:54:27.148284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.315 [2024-07-15 20:54:27.148292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x894c00 is same with the state(5) to be set 00:18:05.315 [2024-07-15 20:54:27.148517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.315 [2024-07-15 20:54:27.148537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894c00 (9): Bad file descriptor 00:18:05.315 [2024-07-15 20:54:27.148622] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.315 [2024-07-15 20:54:27.148636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x894c00 with addr=10.0.0.2, port=4420 00:18:05.315 [2024-07-15 20:54:27.148645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x894c00 is same with the state(5) to be set 00:18:05.315 [2024-07-15 20:54:27.148659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894c00 (9): Bad file descriptor 00:18:05.315 [2024-07-15 20:54:27.148672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:05.315 [2024-07-15 20:54:27.148680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:05.315 [2024-07-15 20:54:27.148690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:05.315 [2024-07-15 20:54:27.148706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:05.315 [2024-07-15 20:54:27.170568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.315 20:54:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 81796 00:18:07.962 [2024-07-15 20:54:29.167533] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.962 [2024-07-15 20:54:29.167597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x894c00 with addr=10.0.0.2, port=4420 00:18:07.962 [2024-07-15 20:54:29.167612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x894c00 is same with the state(5) to be set 00:18:07.962 [2024-07-15 20:54:29.167637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894c00 (9): Bad file descriptor 00:18:07.962 [2024-07-15 20:54:29.167652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:07.962 [2024-07-15 20:54:29.167661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:07.962 [2024-07-15 20:54:29.167673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:07.962 [2024-07-15 20:54:29.167696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:07.962 [2024-07-15 20:54:29.167705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:09.339 [2024-07-15 20:54:31.164621] uring.c: 610:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:09.339 [2024-07-15 20:54:31.164683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x894c00 with addr=10.0.0.2, port=4420 00:18:09.339 [2024-07-15 20:54:31.164698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x894c00 is same with the state(5) to be set 00:18:09.339 [2024-07-15 20:54:31.164721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894c00 (9): Bad file descriptor 00:18:09.339 [2024-07-15 20:54:31.164737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:09.339 [2024-07-15 20:54:31.164746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:09.339 [2024-07-15 20:54:31.164758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:09.339 [2024-07-15 20:54:31.164780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:09.339 [2024-07-15 20:54:31.164788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:11.871 [2024-07-15 20:54:33.161601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
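The loop above shows the bdev_nvme layer retrying the TCP connection to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 roughly every two seconds: each attempt fails with errno 111 (connection refused), so controller reinitialization fails and another reset is scheduled shortly after. As a rough illustration only, a controller with this kind of reconnect behaviour could be attached through rpc.py along the lines of the sketch below; the reconnect/timeout flag names and the numeric values are assumptions about typical SPDK usage, not values read from this run.

    # Hypothetical sketch -- attach an NVMe-oF TCP controller with explicit reconnect tuning.
    # Flag names and values are assumptions; check scripts/rpc.py bdev_nvme_attach_controller -h.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 8 \
        --fast-io-fail-timeout-sec 4

With settings like these, each failed connect() is retried after the reconnect delay until the controller-loss timeout expires, which matches the ~2 s spacing of the "reconnect bdev controller NVMe0" events recorded in the trace output that follows.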
00:18:11.871 [2024-07-15 20:54:33.161644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:11.871 [2024-07-15 20:54:33.161653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:11.871 [2024-07-15 20:54:33.161664] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:11.871 [2024-07-15 20:54:33.161683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.437 00:18:12.437 Latency(us) 00:18:12.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.437 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:12.437 NVMe0n1 : 8.10 2650.40 10.35 15.80 0.00 48135.95 6369.36 7061253.96 00:18:12.437 =================================================================================================================== 00:18:12.437 Total : 2650.40 10.35 15.80 0.00 48135.95 6369.36 7061253.96 00:18:12.437 0 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:12.437 Attaching 5 probes... 00:18:12.437 1141.813427: reset bdev controller NVMe0 00:18:12.437 1141.870771: reconnect bdev controller NVMe0 00:18:12.437 3160.716672: reconnect delay bdev controller NVMe0 00:18:12.437 3160.736284: reconnect bdev controller NVMe0 00:18:12.437 5157.805432: reconnect delay bdev controller NVMe0 00:18:12.437 5157.827872: reconnect bdev controller NVMe0 00:18:12.437 7154.887693: reconnect delay bdev controller NVMe0 00:18:12.437 7154.903925: reconnect bdev controller NVMe0 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 81754 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81744 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81744 ']' 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81744 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81744 00:18:12.437 killing process with pid 81744 00:18:12.437 Received shutdown signal, test time was about 8.189725 seconds 00:18:12.437 00:18:12.437 Latency(us) 00:18:12.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.437 =================================================================================================================== 00:18:12.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81744' 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout 
-- common/autotest_common.sh@967 -- # kill 81744 00:18:12.437 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81744 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.754 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.026 rmmod nvme_tcp 00:18:13.026 rmmod nvme_fabrics 00:18:13.026 rmmod nvme_keyring 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81313 ']' 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81313 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81313 ']' 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81313 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81313 00:18:13.026 killing process with pid 81313 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81313' 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81313 00:18:13.026 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81313 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:13.284 ************************************ 00:18:13.284 END 
TEST nvmf_timeout 00:18:13.284 ************************************ 00:18:13.284 00:18:13.284 real 0m45.473s 00:18:13.284 user 2m11.333s 00:18:13.284 sys 0m6.544s 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.284 20:54:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:13.284 20:54:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:13.284 20:54:35 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:18:13.284 20:54:35 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:18:13.284 20:54:35 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.284 20:54:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.284 20:54:35 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:18:13.284 ************************************ 00:18:13.284 END TEST nvmf_tcp 00:18:13.284 ************************************ 00:18:13.284 00:18:13.284 real 11m4.587s 00:18:13.284 user 26m3.287s 00:18:13.284 sys 3m19.965s 00:18:13.284 20:54:35 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.284 20:54:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.284 20:54:35 -- common/autotest_common.sh@1142 -- # return 0 00:18:13.284 20:54:35 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:18:13.284 20:54:35 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:13.284 20:54:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:13.284 20:54:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.284 20:54:35 -- common/autotest_common.sh@10 -- # set +x 00:18:13.284 ************************************ 00:18:13.284 START TEST nvmf_dif 00:18:13.284 ************************************ 00:18:13.284 20:54:35 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:13.542 * Looking for test storage... 
00:18:13.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:13.542 20:54:35 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.542 20:54:35 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.542 20:54:35 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.542 20:54:35 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.542 20:54:35 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.542 20:54:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.543 20:54:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.543 20:54:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.543 20:54:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:13.543 20:54:35 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.543 20:54:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:13.543 20:54:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:13.543 20:54:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:13.543 20:54:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:13.543 20:54:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.543 20:54:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:13.543 20:54:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.543 20:54:35 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:13.543 Cannot find device "nvmf_tgt_br" 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@155 -- # true 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.543 Cannot find device "nvmf_tgt_br2" 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@156 -- # true 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:13.543 Cannot find device "nvmf_tgt_br" 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@158 -- # true 00:18:13.543 20:54:35 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:13.801 Cannot find device "nvmf_tgt_br2" 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@159 -- # true 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.801 
20:54:35 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:13.801 20:54:35 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:14.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:18:14.059 00:18:14.059 --- 10.0.0.2 ping statistics --- 00:18:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.059 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:14.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:14.059 00:18:14.059 --- 10.0.0.3 ping statistics --- 00:18:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.059 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:14.059 00:18:14.059 --- 10.0.0.1 ping statistics --- 00:18:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.059 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:18:14.059 20:54:35 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.623 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.623 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.623 20:54:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:14.623 20:54:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82232 00:18:14.623 
20:54:36 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82232 00:18:14.623 20:54:36 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 82232 ']' 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.623 20:54:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:14.623 [2024-07-15 20:54:36.434539] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:18:14.623 [2024-07-15 20:54:36.434612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.881 [2024-07-15 20:54:36.578155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.881 [2024-07-15 20:54:36.672086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.881 [2024-07-15 20:54:36.672133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.881 [2024-07-15 20:54:36.672143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.881 [2024-07-15 20:54:36.672151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.881 [2024-07-15 20:54:36.672158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
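As the startup banner above notes, the target is launched inside the nvmf_tgt_ns_spdk namespace with every tracepoint group enabled (-e 0xFFFF), and it prints how the resulting trace can be inspected. Using only the paths and commands shown in the log, a by-hand version of that sequence would look roughly like the following illustrative sketch (it is not part of the test scripts; the copy destination is arbitrary):

    # Start the NVMe-oF target inside the test namespace with all tracepoint groups enabled
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

    # Capture a snapshot of the nvmf tracepoints at runtime (command suggested by the app itself)
    spdk_trace -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis, as the banner suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0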
00:18:14.881 [2024-07-15 20:54:36.672195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.881 [2024-07-15 20:54:36.713200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:15.445 20:54:37 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.445 20:54:37 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:18:15.445 20:54:37 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.445 20:54:37 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.445 20:54:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:15.445 20:54:37 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.446 20:54:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:15.446 20:54:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:15.446 20:54:37 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.446 20:54:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 [2024-07-15 20:54:37.319031] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.446 20:54:37 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.446 20:54:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:15.446 20:54:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:15.446 20:54:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.446 20:54:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 ************************************ 00:18:15.446 START TEST fio_dif_1_default 00:18:15.446 ************************************ 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.446 bdev_null0 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.446 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.703 20:54:37 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:15.703 [2024-07-15 20:54:37.379248] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:15.703 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:15.703 { 00:18:15.703 "params": { 00:18:15.704 "name": "Nvme$subsystem", 00:18:15.704 "trtype": "$TEST_TRANSPORT", 00:18:15.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.704 "adrfam": "ipv4", 00:18:15.704 "trsvcid": "$NVMF_PORT", 00:18:15.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.704 "hdgst": ${hdgst:-false}, 00:18:15.704 "ddgst": ${ddgst:-false} 00:18:15.704 }, 00:18:15.704 "method": "bdev_nvme_attach_controller" 00:18:15.704 } 00:18:15.704 EOF 00:18:15.704 )") 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:15.704 "params": { 00:18:15.704 "name": "Nvme0", 00:18:15.704 "trtype": "tcp", 00:18:15.704 "traddr": "10.0.0.2", 00:18:15.704 "adrfam": "ipv4", 00:18:15.704 "trsvcid": "4420", 00:18:15.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:15.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:15.704 "hdgst": false, 00:18:15.704 "ddgst": false 00:18:15.704 }, 00:18:15.704 "method": "bdev_nvme_attach_controller" 00:18:15.704 }' 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.704 20:54:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:15.962 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:15.962 fio-3.35 00:18:15.962 Starting 1 thread 00:18:28.199 00:18:28.199 filename0: (groupid=0, jobs=1): err= 0: pid=82304: Mon Jul 15 20:54:48 2024 00:18:28.199 read: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(476MiB/10001msec) 00:18:28.199 slat (usec): min=5, max=163, avg= 6.05, stdev= 1.43 00:18:28.199 clat (usec): min=289, max=2245, avg=311.82, stdev=19.45 00:18:28.199 lat (usec): min=294, max=2280, avg=317.87, stdev=19.72 00:18:28.199 clat percentiles (usec): 00:18:28.199 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 297], 20.00th=[ 302], 00:18:28.199 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 310], 60.00th=[ 314], 00:18:28.199 | 70.00th=[ 318], 80.00th=[ 322], 90.00th=[ 326], 95.00th=[ 330], 00:18:28.199 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 482], 99.95th=[ 529], 00:18:28.199 | 99.99th=[ 857] 00:18:28.199 bw ( KiB/s): min=48095, max=49024, per=100.00%, avg=48761.11, stdev=215.48, samples=19 00:18:28.199 iops : min=12023, max=12256, avg=12190.21, stdev=53.99, samples=19 00:18:28.199 lat (usec) : 500=99.92%, 750=0.06%, 1000=0.01% 00:18:28.199 lat 
(msec) : 2=0.01%, 4=0.01% 00:18:28.199 cpu : usr=81.85%, sys=16.53%, ctx=95, majf=0, minf=0 00:18:28.199 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.199 issued rwts: total=121804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.199 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:28.199 00:18:28.199 Run status group 0 (all jobs): 00:18:28.199 READ: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=476MiB (499MB), run=10001-10001msec 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 ************************************ 00:18:28.199 END TEST fio_dif_1_default 00:18:28.199 ************************************ 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.199 00:18:28.199 real 0m10.977s 00:18:28.199 user 0m8.781s 00:18:28.199 sys 0m1.976s 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.199 20:54:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 20:54:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:18:28.199 20:54:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:28.199 20:54:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:28.199 20:54:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.199 20:54:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 ************************************ 00:18:28.200 START TEST fio_dif_1_multi_subsystems 00:18:28.200 ************************************ 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.200 20:54:48 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 bdev_null0 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 [2024-07-15 20:54:48.426274] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 bdev_null1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:28.200 { 00:18:28.200 "params": { 00:18:28.200 "name": "Nvme$subsystem", 00:18:28.200 "trtype": "$TEST_TRANSPORT", 00:18:28.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.200 "adrfam": "ipv4", 00:18:28.200 "trsvcid": "$NVMF_PORT", 00:18:28.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.200 "hdgst": ${hdgst:-false}, 00:18:28.200 "ddgst": ${ddgst:-false} 00:18:28.200 }, 00:18:28.200 "method": "bdev_nvme_attach_controller" 00:18:28.200 } 00:18:28.200 EOF 00:18:28.200 )") 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:28.200 { 00:18:28.200 "params": { 00:18:28.200 "name": "Nvme$subsystem", 00:18:28.200 "trtype": "$TEST_TRANSPORT", 00:18:28.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.200 "adrfam": "ipv4", 00:18:28.200 "trsvcid": "$NVMF_PORT", 00:18:28.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.200 "hdgst": ${hdgst:-false}, 00:18:28.200 "ddgst": ${ddgst:-false} 00:18:28.200 }, 00:18:28.200 "method": "bdev_nvme_attach_controller" 00:18:28.200 } 00:18:28.200 EOF 00:18:28.200 )") 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
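For reference, the xtrace above condenses into a standalone sketch of what the harness effectively executes for this two-subsystem case: it generates a bdev JSON config with one bdev_nvme_attach_controller entry per target subsystem and launches stock fio with the SPDK bdev plugin preloaded. In the sketch below, the "subsystems"/"config" wrapper, the multi_sub.json/multi_sub.fio file names, the thread=1 setting and the Nvme0n1/Nvme1n1 bdev names are assumptions based on SPDK's usual conventions (the test actually streams both files through /dev/fd/62 and /dev/fd/61); the plugin path, target address and port, NQNs and the fio parameters (randread, 4096-byte blocks, iodepth 4, ~10 s runtime, job names filename0/filename1) are taken from the printf and fio output that follow in the log.

# Sketch only -- reconstructed from the xtrace, not the harness's exact files.
cat > multi_sub.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        }
      ]
    }
  ]
}
JSON

cat > multi_sub.fio <<'FIO'
[global]
ioengine=spdk_bdev   ; provided by the preloaded plugin below
thread=1             ; assumed: the spdk_bdev engine runs in thread mode
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1     ; assumed bdev name for cnode0's namespace

[filename1]
filename=Nvme1n1     ; assumed bdev name for cnode1's namespace
FIO

# Same invocation shape as the log's fio_plugin call: preload the SPDK bdev
# engine, point it at the JSON config, and run the job file.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf multi_sub.json multi_sub.fio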
00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:28.200 "params": { 00:18:28.200 "name": "Nvme0", 00:18:28.200 "trtype": "tcp", 00:18:28.200 "traddr": "10.0.0.2", 00:18:28.200 "adrfam": "ipv4", 00:18:28.200 "trsvcid": "4420", 00:18:28.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:28.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:28.200 "hdgst": false, 00:18:28.200 "ddgst": false 00:18:28.200 }, 00:18:28.200 "method": "bdev_nvme_attach_controller" 00:18:28.200 },{ 00:18:28.200 "params": { 00:18:28.200 "name": "Nvme1", 00:18:28.200 "trtype": "tcp", 00:18:28.200 "traddr": "10.0.0.2", 00:18:28.200 "adrfam": "ipv4", 00:18:28.200 "trsvcid": "4420", 00:18:28.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.200 "hdgst": false, 00:18:28.200 "ddgst": false 00:18:28.200 }, 00:18:28.200 "method": "bdev_nvme_attach_controller" 00:18:28.200 }' 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.200 20:54:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.200 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:28.200 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:28.200 fio-3.35 00:18:28.200 Starting 2 threads 00:18:38.181 00:18:38.181 filename0: (groupid=0, jobs=1): err= 0: pid=82458: Mon Jul 15 20:54:59 2024 00:18:38.181 read: IOPS=6452, BW=25.2MiB/s (26.4MB/s)(252MiB/10001msec) 00:18:38.181 slat (nsec): min=5828, max=77942, avg=10928.36, stdev=2878.50 00:18:38.181 clat (usec): min=518, max=1052, avg=591.03, stdev=20.15 00:18:38.181 lat (usec): min=524, max=1087, avg=601.95, stdev=20.28 00:18:38.181 clat percentiles (usec): 00:18:38.181 | 1.00th=[ 553], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:18:38.181 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 594], 00:18:38.181 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 627], 00:18:38.181 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 685], 99.95th=[ 717], 00:18:38.181 | 99.99th=[ 816] 00:18:38.181 bw ( KiB/s): min=25728, max=25920, per=50.05%, avg=25839.79, stdev=46.04, samples=19 00:18:38.181 iops : min= 6432, max= 
6480, avg=6459.95, stdev=11.51, samples=19 00:18:38.181 lat (usec) : 750=99.98%, 1000=0.01% 00:18:38.181 lat (msec) : 2=0.01% 00:18:38.181 cpu : usr=89.26%, sys=9.80%, ctx=23, majf=0, minf=0 00:18:38.181 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:38.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.181 issued rwts: total=64536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.181 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:38.181 filename1: (groupid=0, jobs=1): err= 0: pid=82459: Mon Jul 15 20:54:59 2024 00:18:38.181 read: IOPS=6453, BW=25.2MiB/s (26.4MB/s)(252MiB/10001msec) 00:18:38.181 slat (nsec): min=5831, max=76505, avg=10680.27, stdev=2588.84 00:18:38.181 clat (usec): min=308, max=1085, avg=591.99, stdev=24.28 00:18:38.181 lat (usec): min=313, max=1118, avg=602.67, stdev=24.98 00:18:38.181 clat percentiles (usec): 00:18:38.181 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 578], 00:18:38.181 | 30.00th=[ 586], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 603], 00:18:38.181 | 70.00th=[ 603], 80.00th=[ 611], 90.00th=[ 619], 95.00th=[ 627], 00:18:38.181 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 709], 00:18:38.181 | 99.99th=[ 750] 00:18:38.181 bw ( KiB/s): min=25728, max=25920, per=50.05%, avg=25839.79, stdev=46.04, samples=19 00:18:38.181 iops : min= 6432, max= 6480, avg=6459.95, stdev=11.51, samples=19 00:18:38.181 lat (usec) : 500=0.01%, 750=99.98%, 1000=0.01% 00:18:38.181 lat (msec) : 2=0.01% 00:18:38.181 cpu : usr=88.46%, sys=10.58%, ctx=76, majf=0, minf=0 00:18:38.181 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:38.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.181 issued rwts: total=64540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.181 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:38.181 00:18:38.181 Run status group 0 (all jobs): 00:18:38.181 READ: bw=50.4MiB/s (52.9MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=504MiB (529MB), run=10001-10001msec 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 
-- # set +x 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.181 ************************************ 00:18:38.181 END TEST fio_dif_1_multi_subsystems 00:18:38.181 ************************************ 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.181 00:18:38.181 real 0m11.108s 00:18:38.181 user 0m18.479s 00:18:38.181 sys 0m2.366s 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.181 20:54:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:38.181 20:54:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:18:38.181 20:54:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:38.181 20:54:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:38.181 20:54:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.181 20:54:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:38.181 ************************************ 00:18:38.181 START TEST fio_dif_rand_params 00:18:38.181 ************************************ 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:38.181 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.182 bdev_null0 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:38.182 [2024-07-15 20:54:59.613261] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.182 { 00:18:38.182 "params": { 00:18:38.182 "name": "Nvme$subsystem", 00:18:38.182 "trtype": "$TEST_TRANSPORT", 00:18:38.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.182 "adrfam": "ipv4", 00:18:38.182 "trsvcid": "$NVMF_PORT", 00:18:38.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.182 "hdgst": ${hdgst:-false}, 00:18:38.182 "ddgst": ${ddgst:-false} 00:18:38.182 }, 00:18:38.182 "method": "bdev_nvme_attach_controller" 00:18:38.182 } 00:18:38.182 EOF 00:18:38.182 )") 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.182 "params": { 00:18:38.182 "name": "Nvme0", 00:18:38.182 "trtype": "tcp", 00:18:38.182 "traddr": "10.0.0.2", 00:18:38.182 "adrfam": "ipv4", 00:18:38.182 "trsvcid": "4420", 00:18:38.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:38.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:38.182 "hdgst": false, 00:18:38.182 "ddgst": false 00:18:38.182 }, 00:18:38.182 "method": "bdev_nvme_attach_controller" 00:18:38.182 }' 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:38.182 20:54:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.182 20:54:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:38.182 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:38.182 ... 00:18:38.182 fio-3.35 00:18:38.182 Starting 3 threads 00:18:43.448 00:18:43.448 filename0: (groupid=0, jobs=1): err= 0: pid=82620: Mon Jul 15 20:55:05 2024 00:18:43.448 read: IOPS=334, BW=41.9MiB/s (43.9MB/s)(210MiB/5007msec) 00:18:43.448 slat (nsec): min=5964, max=33519, avg=9013.56, stdev=3387.85 00:18:43.448 clat (usec): min=8847, max=9238, avg=8933.63, stdev=27.11 00:18:43.448 lat (usec): min=8854, max=9268, avg=8942.64, stdev=27.98 00:18:43.448 clat percentiles (usec): 00:18:43.448 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8979], 00:18:43.448 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 8979], 00:18:43.449 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 8979], 00:18:43.449 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9241], 99.95th=[ 9241], 00:18:43.449 | 99.99th=[ 9241] 00:18:43.449 bw ( KiB/s): min=42240, max=43008, per=33.37%, avg=42922.67, stdev=256.00, samples=9 00:18:43.449 iops : min= 330, max= 336, avg=335.33, stdev= 2.00, samples=9 00:18:43.449 lat (msec) : 10=100.00% 00:18:43.449 cpu : usr=88.53%, sys=10.79%, ctx=11, majf=0, minf=9 00:18:43.449 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.449 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.449 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:43.449 filename0: (groupid=0, jobs=1): err= 0: pid=82621: Mon Jul 15 20:55:05 2024 00:18:43.449 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(210MiB/5005msec) 00:18:43.449 slat (nsec): min=6023, max=29932, avg=8559.63, stdev=2948.12 00:18:43.449 clat (usec): min=5758, max=11111, avg=8932.45, stdev=163.64 00:18:43.449 lat (usec): min=5765, max=11141, avg=8941.01, stdev=163.93 00:18:43.449 clat percentiles (usec): 00:18:43.449 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:18:43.449 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 8979], 00:18:43.449 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 8979], 00:18:43.449 | 99.00th=[ 8979], 99.50th=[ 8979], 99.90th=[11076], 99.95th=[11076], 00:18:43.449 | 99.99th=[11076] 00:18:43.449 bw ( KiB/s): min=42240, max=43008, per=33.31%, avg=42846.67, stdev=320.82, samples=9 00:18:43.449 iops : min= 330, max= 336, avg=334.67, stdev= 2.65, samples=9 00:18:43.449 lat (msec) : 10=99.82%, 20=0.18% 00:18:43.449 cpu : usr=90.73%, sys=8.85%, ctx=14, majf=0, minf=9 00:18:43.449 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.449 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.449 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:43.449 filename0: (groupid=0, jobs=1): err= 0: pid=82622: Mon Jul 15 20:55:05 2024 00:18:43.449 read: IOPS=334, BW=41.9MiB/s (43.9MB/s)(210MiB/5006msec) 00:18:43.449 slat (nsec): min=6029, max=28704, avg=8251.21, stdev=2388.51 00:18:43.449 clat (usec): min=7269, 
max=10003, avg=8934.53, stdev=85.66 00:18:43.449 lat (usec): min=7275, max=10032, avg=8942.78, stdev=86.03 00:18:43.449 clat percentiles (usec): 00:18:43.449 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:18:43.449 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 8979], 00:18:43.449 | 70.00th=[ 8979], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 8979], 00:18:43.449 | 99.00th=[ 8979], 99.50th=[ 8979], 99.90th=[10028], 99.95th=[10028], 00:18:43.449 | 99.99th=[10028] 00:18:43.449 bw ( KiB/s): min=42240, max=43008, per=33.37%, avg=42922.67, stdev=256.00, samples=9 00:18:43.449 iops : min= 330, max= 336, avg=335.33, stdev= 2.00, samples=9 00:18:43.449 lat (msec) : 10=99.94%, 20=0.06% 00:18:43.449 cpu : usr=89.81%, sys=9.77%, ctx=10, majf=0, minf=9 00:18:43.449 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:43.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.449 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.449 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:43.449 00:18:43.449 Run status group 0 (all jobs): 00:18:43.449 READ: bw=126MiB/s (132MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=629MiB (659MB), run=5005-5007msec 00:18:43.707 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 bdev_null0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 [2024-07-15 20:55:05.582404] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 bdev_null1 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.708 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.967 bdev_null2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:43.967 { 00:18:43.967 "params": { 00:18:43.967 "name": "Nvme$subsystem", 00:18:43.967 "trtype": "$TEST_TRANSPORT", 00:18:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.967 "adrfam": "ipv4", 00:18:43.967 "trsvcid": "$NVMF_PORT", 00:18:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.967 "hdgst": ${hdgst:-false}, 00:18:43.967 "ddgst": ${ddgst:-false} 00:18:43.967 }, 00:18:43.967 "method": "bdev_nvme_attach_controller" 00:18:43.967 } 00:18:43.967 EOF 00:18:43.967 )") 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:43.967 { 00:18:43.967 "params": { 00:18:43.967 "name": "Nvme$subsystem", 00:18:43.967 "trtype": "$TEST_TRANSPORT", 00:18:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.967 "adrfam": "ipv4", 00:18:43.967 "trsvcid": "$NVMF_PORT", 00:18:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.967 "hdgst": ${hdgst:-false}, 00:18:43.967 "ddgst": ${ddgst:-false} 00:18:43.967 }, 00:18:43.967 "method": "bdev_nvme_attach_controller" 00:18:43.967 } 00:18:43.967 EOF 00:18:43.967 )") 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:43.967 { 00:18:43.967 "params": { 00:18:43.967 "name": "Nvme$subsystem", 00:18:43.967 "trtype": "$TEST_TRANSPORT", 00:18:43.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.967 "adrfam": "ipv4", 00:18:43.967 "trsvcid": "$NVMF_PORT", 00:18:43.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.967 "hdgst": ${hdgst:-false}, 00:18:43.967 "ddgst": ${ddgst:-false} 00:18:43.967 }, 00:18:43.967 "method": "bdev_nvme_attach_controller" 00:18:43.967 } 00:18:43.967 EOF 00:18:43.967 )") 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:18:43.967 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:43.968 "params": { 00:18:43.968 "name": "Nvme0", 00:18:43.968 "trtype": "tcp", 00:18:43.968 "traddr": "10.0.0.2", 00:18:43.968 "adrfam": "ipv4", 00:18:43.968 "trsvcid": "4420", 00:18:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:43.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:43.968 "hdgst": false, 00:18:43.968 "ddgst": false 00:18:43.968 }, 00:18:43.968 "method": "bdev_nvme_attach_controller" 00:18:43.968 },{ 00:18:43.968 "params": { 00:18:43.968 "name": "Nvme1", 00:18:43.968 "trtype": "tcp", 00:18:43.968 "traddr": "10.0.0.2", 00:18:43.968 "adrfam": "ipv4", 00:18:43.968 "trsvcid": "4420", 00:18:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.968 "hdgst": false, 00:18:43.968 "ddgst": false 00:18:43.968 }, 00:18:43.968 "method": "bdev_nvme_attach_controller" 00:18:43.968 },{ 00:18:43.968 "params": { 00:18:43.968 "name": "Nvme2", 00:18:43.968 "trtype": "tcp", 00:18:43.968 "traddr": "10.0.0.2", 00:18:43.968 "adrfam": "ipv4", 00:18:43.968 "trsvcid": "4420", 00:18:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:43.968 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:43.968 "hdgst": false, 00:18:43.968 "ddgst": false 00:18:43.968 }, 00:18:43.968 "method": "bdev_nvme_attach_controller" 00:18:43.968 }' 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:43.968 
20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.968 20:55:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:44.226 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:44.226 ... 00:18:44.226 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:44.226 ... 00:18:44.226 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:44.226 ... 00:18:44.226 fio-3.35 00:18:44.226 Starting 24 threads 00:18:56.489 00:18:56.489 filename0: (groupid=0, jobs=1): err= 0: pid=82717: Mon Jul 15 20:55:16 2024 00:18:56.489 read: IOPS=315, BW=1262KiB/s (1293kB/s)(12.3MiB/10003msec) 00:18:56.489 slat (usec): min=2, max=11021, avg=26.35, stdev=303.48 00:18:56.489 clat (msec): min=23, max=114, avg=50.58, stdev=12.72 00:18:56.489 lat (msec): min=23, max=114, avg=50.61, stdev=12.72 00:18:56.489 clat percentiles (msec): 00:18:56.489 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:18:56.489 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 54], 00:18:56.489 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 73], 00:18:56.489 | 99.00th=[ 86], 99.50th=[ 91], 99.90th=[ 97], 99.95th=[ 115], 00:18:56.489 | 99.99th=[ 115] 00:18:56.489 bw ( KiB/s): min= 1125, max= 1408, per=4.24%, avg=1260.47, stdev=74.07, samples=19 00:18:56.489 iops : min= 281, max= 352, avg=315.11, stdev=18.54, samples=19 00:18:56.490 lat (msec) : 50=51.57%, 100=48.37%, 250=0.06% 00:18:56.490 cpu : usr=38.26%, sys=2.81%, ctx=1265, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=3157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82718: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=304, BW=1217KiB/s (1247kB/s)(11.9MiB/10018msec) 00:18:56.490 slat (usec): min=6, max=8029, avg=32.62, stdev=383.35 00:18:56.490 clat (usec): min=23817, max=99480, avg=52465.33, stdev=11964.82 00:18:56.490 lat (usec): min=23829, max=99493, avg=52497.95, stdev=11966.52 00:18:56.490 clat percentiles (usec): 00:18:56.490 | 1.00th=[26346], 5.00th=[34866], 10.00th=[35914], 20.00th=[45351], 00:18:56.490 | 30.00th=[47973], 40.00th=[47973], 50.00th=[49546], 60.00th=[56886], 00:18:56.490 | 70.00th=[59507], 80.00th=[60031], 90.00th=[69731], 95.00th=[72877], 00:18:56.490 | 99.00th=[84411], 99.50th=[89654], 99.90th=[92799], 99.95th=[95945], 00:18:56.490 | 99.99th=[99091] 00:18:56.490 bw ( KiB/s): min= 1144, max= 1280, per=4.08%, avg=1213.05, stdev=42.80, samples=20 00:18:56.490 iops : min= 286, max= 320, avg=303.25, stdev=10.69, samples=20 00:18:56.490 lat (msec) : 50=50.80%, 100=49.20% 00:18:56.490 cpu : usr=31.24%, sys=1.99%, ctx=863, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=0.2%, 
4=0.8%, 8=81.9%, 16=17.1%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=3049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82719: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=290, BW=1162KiB/s (1190kB/s)(11.4MiB/10020msec) 00:18:56.490 slat (usec): min=2, max=8024, avg=25.87, stdev=305.78 00:18:56.490 clat (msec): min=24, max=106, avg=54.92, stdev=13.30 00:18:56.490 lat (msec): min=24, max=106, avg=54.95, stdev=13.29 00:18:56.490 clat percentiles (msec): 00:18:56.490 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 46], 00:18:56.490 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 59], 00:18:56.490 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 84], 00:18:56.490 | 99.00th=[ 93], 99.50th=[ 93], 99.90th=[ 106], 99.95th=[ 106], 00:18:56.490 | 99.99th=[ 107] 00:18:56.490 bw ( KiB/s): min= 912, max= 1328, per=3.90%, avg=1158.25, stdev=110.16, samples=20 00:18:56.490 iops : min= 228, max= 332, avg=289.55, stdev=27.55, samples=20 00:18:56.490 lat (msec) : 50=40.66%, 100=59.24%, 250=0.10% 00:18:56.490 cpu : usr=33.81%, sys=2.55%, ctx=1251, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=1.5%, 4=6.2%, 8=76.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=89.4%, 8=9.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=2912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82720: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=301, BW=1207KiB/s (1236kB/s)(11.8MiB/10021msec) 00:18:56.490 slat (usec): min=6, max=8018, avg=22.07, stdev=236.52 00:18:56.490 clat (usec): min=23857, max=97077, avg=52928.81, stdev=12152.92 00:18:56.490 lat (usec): min=23881, max=97085, avg=52950.89, stdev=12154.09 00:18:56.490 clat percentiles (usec): 00:18:56.490 | 1.00th=[27657], 5.00th=[34866], 10.00th=[35914], 20.00th=[44303], 00:18:56.490 | 30.00th=[47973], 40.00th=[48497], 50.00th=[51119], 60.00th=[56361], 00:18:56.490 | 70.00th=[58983], 80.00th=[60031], 90.00th=[69731], 95.00th=[72877], 00:18:56.490 | 99.00th=[85459], 99.50th=[87557], 99.90th=[94897], 99.95th=[95945], 00:18:56.490 | 99.99th=[96994] 00:18:56.490 bw ( KiB/s): min= 1096, max= 1376, per=4.05%, avg=1202.65, stdev=73.46, samples=20 00:18:56.490 iops : min= 274, max= 344, avg=300.65, stdev=18.35, samples=20 00:18:56.490 lat (msec) : 50=44.96%, 100=55.04% 00:18:56.490 cpu : usr=33.69%, sys=2.34%, ctx=1247, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.1%, 16=17.4%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=88.1%, 8=11.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=3023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82721: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=278, BW=1113KiB/s (1140kB/s)(10.9MiB/10019msec) 00:18:56.490 slat (usec): min=2, max=8028, avg=30.16, stdev=371.24 00:18:56.490 clat (msec): min=24, max=111, avg=57.34, 
stdev=13.50 00:18:56.490 lat (msec): min=24, max=112, avg=57.37, stdev=13.50 00:18:56.490 clat percentiles (msec): 00:18:56.490 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:18:56.490 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:18:56.490 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 82], 00:18:56.490 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 112], 00:18:56.490 | 99.99th=[ 112] 00:18:56.490 bw ( KiB/s): min= 896, max= 1282, per=3.73%, avg=1109.15, stdev=106.72, samples=20 00:18:56.490 iops : min= 224, max= 320, avg=277.25, stdev=26.65, samples=20 00:18:56.490 lat (msec) : 50=32.52%, 100=66.33%, 250=1.15% 00:18:56.490 cpu : usr=33.90%, sys=2.42%, ctx=1129, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=2.9%, 4=12.0%, 8=69.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=91.2%, 8=6.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=2789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82722: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=304, BW=1217KiB/s (1246kB/s)(11.9MiB/10009msec) 00:18:56.490 slat (usec): min=3, max=8034, avg=19.12, stdev=205.47 00:18:56.490 clat (msec): min=8, max=109, avg=52.53, stdev=13.77 00:18:56.490 lat (msec): min=8, max=109, avg=52.55, stdev=13.77 00:18:56.490 clat percentiles (msec): 00:18:56.490 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 00:18:56.490 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:18:56.490 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 81], 00:18:56.490 | 99.00th=[ 86], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 110], 00:18:56.490 | 99.99th=[ 110] 00:18:56.490 bw ( KiB/s): min= 894, max= 1408, per=4.06%, avg=1205.05, stdev=105.94, samples=19 00:18:56.490 iops : min= 223, max= 352, avg=301.21, stdev=26.58, samples=19 00:18:56.490 lat (msec) : 10=0.43%, 20=0.10%, 50=48.42%, 100=50.92%, 250=0.13% 00:18:56.490 cpu : usr=31.20%, sys=2.01%, ctx=951, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=3044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82723: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=301, BW=1206KiB/s (1235kB/s)(11.8MiB/10007msec) 00:18:56.490 slat (usec): min=2, max=8031, avg=20.54, stdev=218.78 00:18:56.490 clat (msec): min=7, max=119, avg=52.96, stdev=13.48 00:18:56.490 lat (msec): min=7, max=119, avg=52.98, stdev=13.48 00:18:56.490 clat percentiles (msec): 00:18:56.490 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 43], 00:18:56.490 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 57], 00:18:56.490 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 75], 00:18:56.490 | 99.00th=[ 89], 99.50th=[ 92], 99.90th=[ 108], 99.95th=[ 120], 00:18:56.490 | 99.99th=[ 120] 00:18:56.490 bw ( KiB/s): min= 1021, max= 1352, per=4.03%, avg=1196.47, stdev=93.05, samples=19 00:18:56.490 iops : min= 255, max= 338, avg=299.11, stdev=23.29, samples=19 00:18:56.490 lat (msec) : 10=0.43%, 20=0.10%, 50=47.02%, 
100=52.32%, 250=0.13% 00:18:56.490 cpu : usr=33.95%, sys=2.26%, ctx=996, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=76.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=3018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename0: (groupid=0, jobs=1): err= 0: pid=82724: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10045msec) 00:18:56.490 slat (usec): min=6, max=4018, avg=14.65, stdev=100.16 00:18:56.490 clat (usec): min=3192, max=91185, avg=49973.37, stdev=14553.93 00:18:56.490 lat (usec): min=3201, max=91200, avg=49988.01, stdev=14552.53 00:18:56.490 clat percentiles (usec): 00:18:56.490 | 1.00th=[ 4293], 5.00th=[31065], 10.00th=[34341], 20.00th=[39060], 00:18:56.490 | 30.00th=[44827], 40.00th=[47973], 50.00th=[51643], 60.00th=[54789], 00:18:56.490 | 70.00th=[56361], 80.00th=[59507], 90.00th=[65274], 95.00th=[72877], 00:18:56.490 | 99.00th=[84411], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:18:56.490 | 99.99th=[90702] 00:18:56.490 bw ( KiB/s): min= 1104, max= 2048, per=4.29%, avg=1276.00, stdev=196.48, samples=20 00:18:56.490 iops : min= 276, max= 512, avg=319.00, stdev=49.12, samples=20 00:18:56.490 lat (msec) : 4=1.00%, 10=2.00%, 20=0.50%, 50=41.67%, 100=54.83% 00:18:56.490 cpu : usr=42.68%, sys=3.44%, ctx=1357, majf=0, minf=9 00:18:56.490 IO depths : 1=0.1%, 2=0.8%, 4=2.8%, 8=80.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.490 issued rwts: total=3206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.490 filename1: (groupid=0, jobs=1): err= 0: pid=82725: Mon Jul 15 20:55:16 2024 00:18:56.490 read: IOPS=337, BW=1350KiB/s (1382kB/s)(13.2MiB/10002msec) 00:18:56.490 slat (usec): min=5, max=8021, avg=23.63, stdev=270.20 00:18:56.490 clat (usec): min=1520, max=110887, avg=47321.53, stdev=16694.12 00:18:56.490 lat (usec): min=1527, max=110904, avg=47345.16, stdev=16699.54 00:18:56.490 clat percentiles (msec): 00:18:56.490 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 32], 20.00th=[ 36], 00:18:56.490 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 49], 60.00th=[ 52], 00:18:56.490 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 73], 00:18:56.490 | 99.00th=[ 86], 99.50th=[ 95], 99.90th=[ 96], 99.95th=[ 111], 00:18:56.490 | 99.99th=[ 111] 00:18:56.490 bw ( KiB/s): min= 1000, max= 1400, per=4.28%, avg=1273.95, stdev=91.41, samples=19 00:18:56.490 iops : min= 250, max= 350, avg=318.47, stdev=22.86, samples=19 00:18:56.490 lat (msec) : 2=0.15%, 4=4.12%, 10=1.36%, 20=0.06%, 50=50.43% 00:18:56.490 lat (msec) : 100=43.82%, 250=0.06% 00:18:56.491 cpu : usr=38.00%, sys=2.77%, ctx=1221, majf=0, minf=9 00:18:56.491 IO depths : 1=0.2%, 2=0.7%, 4=2.0%, 8=81.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: 
pid=82726: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=304, BW=1217KiB/s (1247kB/s)(11.9MiB/10022msec) 00:18:56.491 slat (usec): min=5, max=8029, avg=24.41, stdev=289.91 00:18:56.491 clat (msec): min=15, max=105, avg=52.48, stdev=12.52 00:18:56.491 lat (msec): min=15, max=105, avg=52.51, stdev=12.53 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 42], 00:18:56.491 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:18:56.491 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 73], 00:18:56.491 | 99.00th=[ 85], 99.50th=[ 94], 99.90th=[ 96], 99.95th=[ 96], 00:18:56.491 | 99.99th=[ 106] 00:18:56.491 bw ( KiB/s): min= 1096, max= 1344, per=4.08%, avg=1213.05, stdev=60.79, samples=20 00:18:56.491 iops : min= 274, max= 336, avg=303.25, stdev=15.18, samples=20 00:18:56.491 lat (msec) : 20=0.52%, 50=48.00%, 100=51.44%, 250=0.03% 00:18:56.491 cpu : usr=31.05%, sys=2.20%, ctx=867, majf=0, minf=10 00:18:56.491 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.2%, 16=17.0%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=88.3%, 8=11.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: pid=82727: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=313, BW=1253KiB/s (1283kB/s)(12.3MiB/10022msec) 00:18:56.491 slat (usec): min=3, max=4011, avg=15.02, stdev=79.95 00:18:56.491 clat (msec): min=12, max=111, avg=51.02, stdev=12.86 00:18:56.491 lat (msec): min=12, max=111, avg=51.03, stdev=12.86 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 40], 00:18:56.491 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55], 00:18:56.491 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 73], 00:18:56.491 | 99.00th=[ 88], 99.50th=[ 90], 99.90th=[ 100], 99.95th=[ 107], 00:18:56.491 | 99.99th=[ 111] 00:18:56.491 bw ( KiB/s): min= 1016, max= 1536, per=4.20%, avg=1248.65, stdev=104.43, samples=20 00:18:56.491 iops : min= 254, max= 384, avg=312.15, stdev=26.10, samples=20 00:18:56.491 lat (msec) : 20=0.51%, 50=45.14%, 100=54.28%, 250=0.06% 00:18:56.491 cpu : usr=43.65%, sys=2.83%, ctx=1623, majf=0, minf=9 00:18:56.491 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: pid=82728: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=326, BW=1305KiB/s (1336kB/s)(12.8MiB/10004msec) 00:18:56.491 slat (usec): min=6, max=4023, avg=21.41, stdev=164.24 00:18:56.491 clat (msec): min=4, max=107, avg=48.96, stdev=13.54 00:18:56.491 lat (msec): min=4, max=107, avg=48.98, stdev=13.54 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 36], 00:18:56.491 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 53], 00:18:56.491 | 70.00th=[ 56], 80.00th=[ 59], 90.00th=[ 65], 95.00th=[ 72], 00:18:56.491 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 97], 99.95th=[ 108], 00:18:56.491 | 99.99th=[ 108] 
00:18:56.491 bw ( KiB/s): min= 1152, max= 1384, per=4.36%, avg=1295.00, stdev=74.53, samples=19 00:18:56.491 iops : min= 288, max= 346, avg=323.74, stdev=18.64, samples=19 00:18:56.491 lat (msec) : 10=0.64%, 20=0.21%, 50=52.08%, 100=47.00%, 250=0.06% 00:18:56.491 cpu : usr=44.39%, sys=3.12%, ctx=1412, majf=0, minf=9 00:18:56.491 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=83.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: pid=82729: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=311, BW=1246KiB/s (1276kB/s)(12.2MiB/10006msec) 00:18:56.491 slat (usec): min=5, max=8021, avg=24.93, stdev=238.92 00:18:56.491 clat (msec): min=23, max=104, avg=51.25, stdev=12.94 00:18:56.491 lat (msec): min=23, max=104, avg=51.27, stdev=12.94 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:18:56.491 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55], 00:18:56.491 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 67], 95.00th=[ 75], 00:18:56.491 | 99.00th=[ 86], 99.50th=[ 89], 99.90th=[ 96], 99.95th=[ 97], 00:18:56.491 | 99.99th=[ 105] 00:18:56.491 bw ( KiB/s): min= 928, max= 1408, per=4.18%, avg=1242.37, stdev=103.54, samples=19 00:18:56.491 iops : min= 232, max= 352, avg=310.58, stdev=25.88, samples=19 00:18:56.491 lat (msec) : 50=47.58%, 100=52.39%, 250=0.03% 00:18:56.491 cpu : usr=42.66%, sys=2.85%, ctx=1344, majf=0, minf=9 00:18:56.491 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=81.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: pid=82730: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=337, BW=1349KiB/s (1382kB/s)(13.2MiB/10001msec) 00:18:56.491 slat (usec): min=2, max=8034, avg=27.12, stdev=296.98 00:18:56.491 clat (usec): min=1136, max=103991, avg=47313.28, stdev=17401.62 00:18:56.491 lat (usec): min=1143, max=104000, avg=47340.40, stdev=17401.13 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 32], 20.00th=[ 36], 00:18:56.491 | 30.00th=[ 39], 40.00th=[ 48], 50.00th=[ 48], 60.00th=[ 52], 00:18:56.491 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 72], 00:18:56.491 | 99.00th=[ 86], 99.50th=[ 91], 99.90th=[ 96], 99.95th=[ 105], 00:18:56.491 | 99.99th=[ 105] 00:18:56.491 bw ( KiB/s): min= 1024, max= 1408, per=4.24%, avg=1261.32, stdev=85.17, samples=19 00:18:56.491 iops : min= 256, max= 352, avg=315.32, stdev=21.30, samples=19 00:18:56.491 lat (msec) : 2=0.95%, 4=4.53%, 10=0.71%, 20=0.44%, 50=51.24% 00:18:56.491 lat (msec) : 100=42.06%, 250=0.06% 00:18:56.491 cpu : usr=35.40%, sys=2.39%, ctx=1018, majf=0, minf=9 00:18:56.491 IO depths : 1=0.3%, 2=1.0%, 4=2.9%, 8=80.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: 
total=3374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: pid=82731: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=331, BW=1325KiB/s (1356kB/s)(12.9MiB/10001msec) 00:18:56.491 slat (usec): min=6, max=12018, avg=33.46, stdev=381.89 00:18:56.491 clat (usec): min=1083, max=114891, avg=48174.08, stdev=15349.78 00:18:56.491 lat (usec): min=1090, max=114906, avg=48207.54, stdev=15354.42 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 3], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 36], 00:18:56.491 | 30.00th=[ 40], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 53], 00:18:56.491 | 70.00th=[ 57], 80.00th=[ 59], 90.00th=[ 64], 95.00th=[ 73], 00:18:56.491 | 99.00th=[ 86], 99.50th=[ 100], 99.90th=[ 100], 99.95th=[ 115], 00:18:56.491 | 99.99th=[ 115] 00:18:56.491 bw ( KiB/s): min= 1112, max= 1440, per=4.31%, avg=1280.95, stdev=82.82, samples=19 00:18:56.491 iops : min= 278, max= 360, avg=320.21, stdev=20.70, samples=19 00:18:56.491 lat (msec) : 2=0.72%, 4=1.84%, 10=0.72%, 20=0.27%, 50=51.42% 00:18:56.491 lat (msec) : 100=44.96%, 250=0.06% 00:18:56.491 cpu : usr=37.03%, sys=2.81%, ctx=1096, majf=0, minf=9 00:18:56.491 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename1: (groupid=0, jobs=1): err= 0: pid=82732: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=303, BW=1215KiB/s (1244kB/s)(11.9MiB/10002msec) 00:18:56.491 slat (usec): min=3, max=8032, avg=30.54, stdev=355.73 00:18:56.491 clat (msec): min=8, max=129, avg=52.53, stdev=14.01 00:18:56.491 lat (msec): min=8, max=129, avg=52.56, stdev=14.01 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 00:18:56.491 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:18:56.491 | 70.00th=[ 61], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 78], 00:18:56.491 | 99.00th=[ 86], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 130], 00:18:56.491 | 99.99th=[ 130] 00:18:56.491 bw ( KiB/s): min= 1021, max= 1376, per=4.05%, avg=1204.89, stdev=101.08, samples=19 00:18:56.491 iops : min= 255, max= 344, avg=301.21, stdev=25.30, samples=19 00:18:56.491 lat (msec) : 10=0.46%, 20=0.07%, 50=49.57%, 100=49.37%, 250=0.53% 00:18:56.491 cpu : usr=31.03%, sys=2.24%, ctx=864, majf=0, minf=9 00:18:56.491 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.491 issued rwts: total=3038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.491 filename2: (groupid=0, jobs=1): err= 0: pid=82733: Mon Jul 15 20:55:16 2024 00:18:56.491 read: IOPS=313, BW=1256KiB/s (1286kB/s)(12.3MiB/10015msec) 00:18:56.491 slat (usec): min=2, max=4018, avg=15.09, stdev=71.74 00:18:56.491 clat (msec): min=22, max=112, avg=50.89, stdev=13.41 00:18:56.491 lat (msec): min=22, max=112, avg=50.90, stdev=13.41 00:18:56.491 clat percentiles (msec): 00:18:56.491 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 
00:18:56.491 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 00:18:56.491 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 75], 00:18:56.491 | 99.00th=[ 89], 99.50th=[ 95], 99.90th=[ 106], 99.95th=[ 113], 00:18:56.491 | 99.99th=[ 113] 00:18:56.491 bw ( KiB/s): min= 1015, max= 1416, per=4.21%, avg=1252.58, stdev=82.33, samples=19 00:18:56.491 iops : min= 253, max= 354, avg=313.05, stdev=20.71, samples=19 00:18:56.491 lat (msec) : 50=49.59%, 100=50.29%, 250=0.13% 00:18:56.491 cpu : usr=39.54%, sys=2.77%, ctx=1212, majf=0, minf=9 00:18:56.491 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82734: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=324, BW=1297KiB/s (1328kB/s)(12.7MiB/10043msec) 00:18:56.492 slat (usec): min=6, max=7042, avg=21.20, stdev=200.54 00:18:56.492 clat (usec): min=1513, max=98852, avg=49206.20, stdev=15619.14 00:18:56.492 lat (usec): min=1533, max=98860, avg=49227.39, stdev=15622.17 00:18:56.492 clat percentiles (usec): 00:18:56.492 | 1.00th=[ 2180], 5.00th=[29230], 10.00th=[32113], 20.00th=[38011], 00:18:56.492 | 30.00th=[42730], 40.00th=[47973], 50.00th=[51119], 60.00th=[54264], 00:18:56.492 | 70.00th=[56361], 80.00th=[59507], 90.00th=[65799], 95.00th=[72877], 00:18:56.492 | 99.00th=[87557], 99.50th=[87557], 99.90th=[90702], 99.95th=[95945], 00:18:56.492 | 99.99th=[99091] 00:18:56.492 bw ( KiB/s): min= 1040, max= 2496, per=4.36%, avg=1295.85, stdev=292.39, samples=20 00:18:56.492 iops : min= 260, max= 624, avg=323.95, stdev=73.10, samples=20 00:18:56.492 lat (msec) : 2=0.74%, 4=1.72%, 10=1.63%, 20=0.34%, 50=43.34% 00:18:56.492 lat (msec) : 100=52.24% 00:18:56.492 cpu : usr=41.89%, sys=2.95%, ctx=1416, majf=0, minf=0 00:18:56.492 IO depths : 1=0.2%, 2=0.7%, 4=1.9%, 8=80.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82735: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=304, BW=1217KiB/s (1247kB/s)(11.9MiB/10022msec) 00:18:56.492 slat (usec): min=6, max=8024, avg=31.64, stdev=362.70 00:18:56.492 clat (msec): min=16, max=102, avg=52.46, stdev=12.69 00:18:56.492 lat (msec): min=16, max=102, avg=52.49, stdev=12.70 00:18:56.492 clat percentiles (msec): 00:18:56.492 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 42], 00:18:56.492 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 00:18:56.492 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 73], 00:18:56.492 | 99.00th=[ 86], 99.50th=[ 91], 99.90th=[ 97], 99.95th=[ 103], 00:18:56.492 | 99.99th=[ 104] 00:18:56.492 bw ( KiB/s): min= 1112, max= 1352, per=4.08%, avg=1213.05, stdev=61.38, samples=20 00:18:56.492 iops : min= 278, max= 338, avg=303.25, stdev=15.32, samples=20 00:18:56.492 lat (msec) : 20=0.52%, 50=48.62%, 100=50.79%, 250=0.07% 00:18:56.492 cpu : usr=31.05%, sys=2.10%, ctx=962, majf=0, minf=9 00:18:56.492 IO depths : 
1=0.1%, 2=0.4%, 4=1.4%, 8=81.2%, 16=17.0%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=88.2%, 8=11.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82736: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=305, BW=1223KiB/s (1252kB/s)(11.9MiB/10006msec) 00:18:56.492 slat (usec): min=3, max=8022, avg=24.25, stdev=289.31 00:18:56.492 clat (msec): min=6, max=118, avg=52.24, stdev=13.39 00:18:56.492 lat (msec): min=6, max=118, avg=52.26, stdev=13.38 00:18:56.492 clat percentiles (msec): 00:18:56.492 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 41], 00:18:56.492 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 00:18:56.492 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 70], 95.00th=[ 75], 00:18:56.492 | 99.00th=[ 92], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 120], 00:18:56.492 | 99.99th=[ 120] 00:18:56.492 bw ( KiB/s): min= 1034, max= 1328, per=4.10%, avg=1217.21, stdev=67.11, samples=19 00:18:56.492 iops : min= 258, max= 332, avg=304.26, stdev=16.85, samples=19 00:18:56.492 lat (msec) : 10=0.52%, 20=0.10%, 50=48.68%, 100=50.15%, 250=0.56% 00:18:56.492 cpu : usr=31.29%, sys=1.92%, ctx=968, majf=0, minf=9 00:18:56.492 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=81.8%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82737: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=313, BW=1255KiB/s (1286kB/s)(12.3MiB/10036msec) 00:18:56.492 slat (usec): min=6, max=8020, avg=23.28, stdev=285.20 00:18:56.492 clat (msec): min=4, max=107, avg=50.85, stdev=14.39 00:18:56.492 lat (msec): min=4, max=107, avg=50.87, stdev=14.40 00:18:56.492 clat percentiles (msec): 00:18:56.492 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:18:56.492 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 00:18:56.492 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 68], 95.00th=[ 72], 00:18:56.492 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 95], 99.95th=[ 96], 00:18:56.492 | 99.99th=[ 108] 00:18:56.492 bw ( KiB/s): min= 1064, max= 2088, per=4.22%, avg=1253.60, stdev=208.20, samples=20 00:18:56.492 iops : min= 266, max= 522, avg=313.40, stdev=52.05, samples=20 00:18:56.492 lat (msec) : 10=2.54%, 20=0.51%, 50=46.41%, 100=50.51%, 250=0.03% 00:18:56.492 cpu : usr=35.25%, sys=2.39%, ctx=1037, majf=0, minf=9 00:18:56.492 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.1%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82738: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=314, BW=1258KiB/s (1288kB/s)(12.3MiB/10034msec) 00:18:56.492 slat (usec): min=4, max=8020, avg=23.03, stdev=264.67 00:18:56.492 clat (usec): min=4347, max=99792, 
avg=50734.92, stdev=13792.43 00:18:56.492 lat (usec): min=4360, max=99807, avg=50757.95, stdev=13798.54 00:18:56.492 clat percentiles (msec): 00:18:56.492 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:18:56.492 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 55], 00:18:56.492 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 67], 95.00th=[ 73], 00:18:56.492 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 92], 99.95th=[ 92], 00:18:56.492 | 99.99th=[ 101] 00:18:56.492 bw ( KiB/s): min= 992, max= 1856, per=4.22%, avg=1255.60, stdev=163.32, samples=20 00:18:56.492 iops : min= 248, max= 464, avg=313.90, stdev=40.83, samples=20 00:18:56.492 lat (msec) : 10=1.08%, 20=0.95%, 50=45.42%, 100=52.55% 00:18:56.492 cpu : usr=38.83%, sys=2.70%, ctx=1297, majf=0, minf=9 00:18:56.492 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=81.9%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82739: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=317, BW=1271KiB/s (1302kB/s)(12.4MiB/10003msec) 00:18:56.492 slat (usec): min=5, max=4022, avg=18.27, stdev=123.18 00:18:56.492 clat (usec): min=23710, max=99181, avg=50272.52, stdev=13074.39 00:18:56.492 lat (usec): min=23719, max=99189, avg=50290.80, stdev=13075.80 00:18:56.492 clat percentiles (usec): 00:18:56.492 | 1.00th=[25297], 5.00th=[31851], 10.00th=[33817], 20.00th=[37487], 00:18:56.492 | 30.00th=[43254], 40.00th=[47449], 50.00th=[48497], 60.00th=[53740], 00:18:56.492 | 70.00th=[56886], 80.00th=[59507], 90.00th=[66847], 95.00th=[73925], 00:18:56.492 | 99.00th=[86508], 99.50th=[90702], 99.90th=[93848], 99.95th=[99091], 00:18:56.492 | 99.99th=[99091] 00:18:56.492 bw ( KiB/s): min= 1152, max= 1408, per=4.27%, avg=1269.74, stdev=68.67, samples=19 00:18:56.492 iops : min= 288, max= 352, avg=317.42, stdev=17.16, samples=19 00:18:56.492 lat (msec) : 50=53.54%, 100=46.46% 00:18:56.492 cpu : usr=37.14%, sys=2.76%, ctx=1063, majf=0, minf=9 00:18:56.492 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=83.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=3179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 filename2: (groupid=0, jobs=1): err= 0: pid=82740: Mon Jul 15 20:55:16 2024 00:18:56.492 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.8MiB/10014msec) 00:18:56.492 slat (usec): min=4, max=10016, avg=32.87, stdev=384.39 00:18:56.492 clat (msec): min=27, max=115, avg=57.98, stdev=13.61 00:18:56.492 lat (msec): min=27, max=115, avg=58.01, stdev=13.62 00:18:56.492 clat percentiles (msec): 00:18:56.492 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:18:56.492 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 60], 00:18:56.492 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 83], 00:18:56.492 | 99.00th=[ 90], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 115], 00:18:56.492 | 99.99th=[ 115] 00:18:56.492 bw ( KiB/s): min= 896, max= 1264, per=3.67%, avg=1090.26, stdev=107.59, samples=19 00:18:56.492 iops : min= 224, max= 316, avg=272.47, stdev=26.94, samples=19 
00:18:56.492 lat (msec) : 50=27.06%, 100=72.28%, 250=0.65% 00:18:56.492 cpu : usr=40.41%, sys=3.04%, ctx=1169, majf=0, minf=9 00:18:56.492 IO depths : 1=0.1%, 2=3.3%, 4=13.7%, 8=68.2%, 16=14.7%, 32=0.0%, >=64=0.0% 00:18:56.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 complete : 0=0.0%, 4=91.4%, 8=5.6%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.492 issued rwts: total=2753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.492 00:18:56.492 Run status group 0 (all jobs): 00:18:56.492 READ: bw=29.0MiB/s (30.4MB/s), 1100KiB/s-1350KiB/s (1126kB/s-1382kB/s), io=291MiB (306MB), run=10001-10045msec 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.492 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 bdev_null0 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 [2024-07-15 20:55:16.923656] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 bdev_null1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:56.493 { 00:18:56.493 "params": { 00:18:56.493 "name": "Nvme$subsystem", 00:18:56.493 "trtype": "$TEST_TRANSPORT", 00:18:56.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.493 "adrfam": "ipv4", 00:18:56.493 "trsvcid": "$NVMF_PORT", 00:18:56.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.493 "hdgst": 
${hdgst:-false}, 00:18:56.493 "ddgst": ${ddgst:-false} 00:18:56.493 }, 00:18:56.493 "method": "bdev_nvme_attach_controller" 00:18:56.493 } 00:18:56.493 EOF 00:18:56.493 )") 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:56.493 { 00:18:56.493 "params": { 00:18:56.493 "name": "Nvme$subsystem", 00:18:56.493 "trtype": "$TEST_TRANSPORT", 00:18:56.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.493 "adrfam": "ipv4", 00:18:56.493 "trsvcid": "$NVMF_PORT", 00:18:56.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.493 "hdgst": ${hdgst:-false}, 00:18:56.493 "ddgst": ${ddgst:-false} 00:18:56.493 }, 00:18:56.493 "method": "bdev_nvme_attach_controller" 00:18:56.493 } 00:18:56.493 EOF 00:18:56.493 )") 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:56.493 20:55:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:18:56.493 20:55:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:18:56.493 20:55:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:56.493 "params": { 00:18:56.493 "name": "Nvme0", 00:18:56.493 "trtype": "tcp", 00:18:56.493 "traddr": "10.0.0.2", 00:18:56.493 "adrfam": "ipv4", 00:18:56.493 "trsvcid": "4420", 00:18:56.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.493 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:56.493 "hdgst": false, 00:18:56.493 "ddgst": false 00:18:56.493 }, 00:18:56.493 "method": "bdev_nvme_attach_controller" 00:18:56.493 },{ 00:18:56.493 "params": { 00:18:56.493 "name": "Nvme1", 00:18:56.493 "trtype": "tcp", 00:18:56.493 "traddr": "10.0.0.2", 00:18:56.493 "adrfam": "ipv4", 00:18:56.493 "trsvcid": "4420", 00:18:56.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.494 "hdgst": false, 00:18:56.494 "ddgst": false 00:18:56.494 }, 00:18:56.494 "method": "bdev_nvme_attach_controller" 00:18:56.494 }' 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:56.494 20:55:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:56.494 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:56.494 ... 00:18:56.494 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:56.494 ... 
00:18:56.494 fio-3.35 00:18:56.494 Starting 4 threads 00:19:01.785 00:19:01.785 filename0: (groupid=0, jobs=1): err= 0: pid=82887: Mon Jul 15 20:55:22 2024 00:19:01.785 read: IOPS=2787, BW=21.8MiB/s (22.8MB/s)(109MiB/5001msec) 00:19:01.785 slat (nsec): min=5845, max=71863, avg=11755.63, stdev=4018.14 00:19:01.785 clat (usec): min=370, max=5407, avg=2834.84, stdev=768.99 00:19:01.785 lat (usec): min=379, max=5420, avg=2846.60, stdev=769.16 00:19:01.785 clat percentiles (usec): 00:19:01.785 | 1.00th=[ 1188], 5.00th=[ 1647], 10.00th=[ 1680], 20.00th=[ 1909], 00:19:01.785 | 30.00th=[ 2442], 40.00th=[ 2933], 50.00th=[ 3130], 60.00th=[ 3195], 00:19:01.785 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 3851], 00:19:01.785 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4752], 99.95th=[ 4883], 00:19:01.785 | 99.99th=[ 5407] 00:19:01.785 bw ( KiB/s): min=16640, max=25296, per=25.27%, avg=22000.11, stdev=2761.23, samples=9 00:19:01.785 iops : min= 2080, max= 3162, avg=2750.00, stdev=345.14, samples=9 00:19:01.785 lat (usec) : 500=0.01%, 1000=0.44% 00:19:01.785 lat (msec) : 2=24.14%, 4=72.37%, 10=3.05% 00:19:01.785 cpu : usr=90.24%, sys=9.06%, ctx=68, majf=0, minf=0 00:19:01.785 IO depths : 1=0.1%, 2=9.0%, 4=59.1%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.785 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.785 issued rwts: total=13940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.785 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.785 filename0: (groupid=0, jobs=1): err= 0: pid=82888: Mon Jul 15 20:55:22 2024 00:19:01.785 read: IOPS=2488, BW=19.4MiB/s (20.4MB/s)(97.2MiB/5001msec) 00:19:01.785 slat (usec): min=5, max=178, avg=13.72, stdev= 4.65 00:19:01.786 clat (usec): min=718, max=5313, avg=3166.26, stdev=501.30 00:19:01.786 lat (usec): min=727, max=5320, avg=3179.98, stdev=501.05 00:19:01.786 clat percentiles (usec): 00:19:01.786 | 1.00th=[ 1205], 5.00th=[ 2008], 10.00th=[ 2540], 20.00th=[ 2933], 00:19:01.786 | 30.00th=[ 3130], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3425], 00:19:01.786 | 70.00th=[ 3425], 80.00th=[ 3458], 90.00th=[ 3490], 95.00th=[ 3621], 00:19:01.786 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 4080], 99.95th=[ 4359], 00:19:01.786 | 99.99th=[ 4948] 00:19:01.786 bw ( KiB/s): min=18432, max=23568, per=23.07%, avg=20087.11, stdev=1638.38, samples=9 00:19:01.786 iops : min= 2304, max= 2946, avg=2510.89, stdev=204.80, samples=9 00:19:01.786 lat (usec) : 750=0.02%, 1000=0.60% 00:19:01.786 lat (msec) : 2=4.37%, 4=94.17%, 10=0.84% 00:19:01.786 cpu : usr=90.98%, sys=7.98%, ctx=109, majf=0, minf=10 00:19:01.786 IO depths : 1=0.1%, 2=18.7%, 4=54.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.786 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.786 issued rwts: total=12443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.786 filename1: (groupid=0, jobs=1): err= 0: pid=82889: Mon Jul 15 20:55:22 2024 00:19:01.786 read: IOPS=3123, BW=24.4MiB/s (25.6MB/s)(122MiB/5003msec) 00:19:01.786 slat (nsec): min=5882, max=62635, avg=9605.26, stdev=3579.43 00:19:01.786 clat (usec): min=561, max=5204, avg=2537.65, stdev=786.70 00:19:01.786 lat (usec): min=568, max=5215, avg=2547.26, stdev=786.91 00:19:01.786 clat percentiles (usec): 00:19:01.786 | 1.00th=[ 996], 
5.00th=[ 1057], 10.00th=[ 1418], 20.00th=[ 1696], 00:19:01.786 | 30.00th=[ 1958], 40.00th=[ 2278], 50.00th=[ 2802], 60.00th=[ 2966], 00:19:01.786 | 70.00th=[ 3097], 80.00th=[ 3326], 90.00th=[ 3392], 95.00th=[ 3556], 00:19:01.786 | 99.00th=[ 3884], 99.50th=[ 3916], 99.90th=[ 3982], 99.95th=[ 4015], 00:19:01.786 | 99.99th=[ 5211] 00:19:01.786 bw ( KiB/s): min=21892, max=27392, per=28.70%, avg=24990.44, stdev=1731.49, samples=9 00:19:01.786 iops : min= 2736, max= 3424, avg=3123.67, stdev=216.56, samples=9 00:19:01.786 lat (usec) : 750=0.08%, 1000=0.93% 00:19:01.786 lat (msec) : 2=30.20%, 4=68.74%, 10=0.05% 00:19:01.786 cpu : usr=90.34%, sys=8.88%, ctx=10, majf=0, minf=0 00:19:01.786 IO depths : 1=0.1%, 2=1.5%, 4=63.4%, 8=35.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.786 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.786 issued rwts: total=15626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.786 filename1: (groupid=0, jobs=1): err= 0: pid=82890: Mon Jul 15 20:55:22 2024 00:19:01.786 read: IOPS=2487, BW=19.4MiB/s (20.4MB/s)(97.2MiB/5001msec) 00:19:01.786 slat (nsec): min=6132, max=47977, avg=13173.42, stdev=3074.12 00:19:01.786 clat (usec): min=944, max=4970, avg=3170.89, stdev=500.34 00:19:01.786 lat (usec): min=957, max=4993, avg=3184.06, stdev=500.36 00:19:01.786 clat percentiles (usec): 00:19:01.786 | 1.00th=[ 1205], 5.00th=[ 2008], 10.00th=[ 2540], 20.00th=[ 2933], 00:19:01.786 | 30.00th=[ 3130], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3425], 00:19:01.786 | 70.00th=[ 3458], 80.00th=[ 3458], 90.00th=[ 3490], 95.00th=[ 3621], 00:19:01.786 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 4113], 99.95th=[ 4948], 00:19:01.786 | 99.99th=[ 4948] 00:19:01.786 bw ( KiB/s): min=18432, max=23616, per=23.07%, avg=20087.56, stdev=1650.29, samples=9 00:19:01.786 iops : min= 2304, max= 2952, avg=2510.89, stdev=206.23, samples=9 00:19:01.786 lat (usec) : 1000=0.50% 00:19:01.786 lat (msec) : 2=4.48%, 4=94.18%, 10=0.84% 00:19:01.786 cpu : usr=90.96%, sys=8.46%, ctx=7, majf=0, minf=9 00:19:01.786 IO depths : 1=0.1%, 2=18.7%, 4=54.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.786 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.786 issued rwts: total=12438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.786 00:19:01.786 Run status group 0 (all jobs): 00:19:01.786 READ: bw=85.0MiB/s (89.2MB/s), 19.4MiB/s-24.4MiB/s (20.4MB/s-25.6MB/s), io=425MiB (446MB), run=5001-5003msec 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 ************************************ 00:19:01.786 END TEST fio_dif_rand_params 00:19:01.786 ************************************ 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 00:19:01.786 real 0m23.411s 00:19:01.786 user 2m1.802s 00:19:01.786 sys 0m10.234s 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.786 20:55:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 20:55:23 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:01.786 20:55:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:01.786 20:55:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:01.786 20:55:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.786 20:55:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 ************************************ 00:19:01.786 START TEST fio_dif_digest 00:19:01.786 ************************************ 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 bdev_null0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.786 [2024-07-15 20:55:23.092531] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.786 { 00:19:01.786 "params": { 00:19:01.786 "name": "Nvme$subsystem", 00:19:01.786 "trtype": "$TEST_TRANSPORT", 00:19:01.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.786 "adrfam": "ipv4", 00:19:01.786 "trsvcid": "$NVMF_PORT", 00:19:01.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.786 "hdgst": ${hdgst:-false}, 00:19:01.786 "ddgst": ${ddgst:-false} 00:19:01.786 
}, 00:19:01.786 "method": "bdev_nvme_attach_controller" 00:19:01.786 } 00:19:01.786 EOF 00:19:01.786 )") 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:01.786 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:01.787 "params": { 00:19:01.787 "name": "Nvme0", 00:19:01.787 "trtype": "tcp", 00:19:01.787 "traddr": "10.0.0.2", 00:19:01.787 "adrfam": "ipv4", 00:19:01.787 "trsvcid": "4420", 00:19:01.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:01.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:01.787 "hdgst": true, 00:19:01.787 "ddgst": true 00:19:01.787 }, 00:19:01.787 "method": "bdev_nvme_attach_controller" 00:19:01.787 }' 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:01.787 20:55:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.787 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:01.787 ... 
00:19:01.787 fio-3.35 00:19:01.787 Starting 3 threads 00:19:13.980 00:19:13.980 filename0: (groupid=0, jobs=1): err= 0: pid=83001: Mon Jul 15 20:55:33 2024 00:19:13.980 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(364MiB/10007msec) 00:19:13.980 slat (nsec): min=6202, max=30783, avg=8920.84, stdev=2948.13 00:19:13.980 clat (usec): min=7660, max=10776, avg=10293.92, stdev=93.43 00:19:13.980 lat (usec): min=7667, max=10790, avg=10302.84, stdev=93.67 00:19:13.980 clat percentiles (usec): 00:19:13.980 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:19:13.980 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:19:13.980 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10290], 95.00th=[10290], 00:19:13.980 | 99.00th=[10421], 99.50th=[10552], 99.90th=[10814], 99.95th=[10814], 00:19:13.980 | 99.99th=[10814] 00:19:13.980 bw ( KiB/s): min=36864, max=37632, per=33.33%, avg=37209.60, stdev=392.00, samples=20 00:19:13.980 iops : min= 288, max= 294, avg=290.70, stdev= 3.06, samples=20 00:19:13.980 lat (msec) : 10=0.10%, 20=99.90% 00:19:13.980 cpu : usr=88.61%, sys=10.97%, ctx=18, majf=0, minf=0 00:19:13.980 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.980 issued rwts: total=2910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.980 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:13.980 filename0: (groupid=0, jobs=1): err= 0: pid=83002: Mon Jul 15 20:55:33 2024 00:19:13.980 read: IOPS=290, BW=36.4MiB/s (38.1MB/s)(364MiB/10005msec) 00:19:13.980 slat (nsec): min=6171, max=28452, avg=8556.64, stdev=2590.00 00:19:13.980 clat (usec): min=5620, max=11143, avg=10293.33, stdev=156.81 00:19:13.980 lat (usec): min=5627, max=11171, avg=10301.89, stdev=156.96 00:19:13.980 clat percentiles (usec): 00:19:13.980 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:19:13.980 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290], 00:19:13.980 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10290], 95.00th=[10290], 00:19:13.980 | 99.00th=[10421], 99.50th=[10552], 99.90th=[11076], 99.95th=[11076], 00:19:13.980 | 99.99th=[11207] 00:19:13.980 bw ( KiB/s): min=36864, max=37632, per=33.33%, avg=37209.60, stdev=392.00, samples=20 00:19:13.980 iops : min= 288, max= 294, avg=290.70, stdev= 3.06, samples=20 00:19:13.980 lat (msec) : 10=0.10%, 20=99.90% 00:19:13.980 cpu : usr=90.20%, sys=9.40%, ctx=11, majf=0, minf=0 00:19:13.980 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.980 issued rwts: total=2910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.980 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:13.980 filename0: (groupid=0, jobs=1): err= 0: pid=83003: Mon Jul 15 20:55:33 2024 00:19:13.980 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(363MiB/10001msec) 00:19:13.980 slat (nsec): min=6194, max=49197, avg=8607.49, stdev=2760.24 00:19:13.980 clat (usec): min=10237, max=12484, avg=10299.33, stdev=78.04 00:19:13.980 lat (usec): min=10244, max=12533, avg=10307.94, stdev=78.69 00:19:13.980 clat percentiles (usec): 00:19:13.981 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:19:13.981 | 30.00th=[10290], 40.00th=[10290], 
50.00th=[10290], 60.00th=[10290], 00:19:13.981 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10290], 95.00th=[10290], 00:19:13.981 | 99.00th=[10421], 99.50th=[10552], 99.90th=[12518], 99.95th=[12518], 00:19:13.981 | 99.99th=[12518] 00:19:13.981 bw ( KiB/s): min=36864, max=37632, per=33.31%, avg=37187.37, stdev=389.57, samples=19 00:19:13.981 iops : min= 288, max= 294, avg=290.53, stdev= 3.04, samples=19 00:19:13.981 lat (msec) : 20=100.00% 00:19:13.981 cpu : usr=89.60%, sys=9.99%, ctx=91, majf=0, minf=0 00:19:13.981 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:13.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.981 issued rwts: total=2907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.981 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:13.981 00:19:13.981 Run status group 0 (all jobs): 00:19:13.981 READ: bw=109MiB/s (114MB/s), 36.3MiB/s-36.4MiB/s (38.1MB/s-38.1MB/s), io=1091MiB (1144MB), run=10001-10007msec 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.981 00:19:13.981 real 0m10.999s 00:19:13.981 user 0m27.486s 00:19:13.981 sys 0m3.345s 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.981 ************************************ 00:19:13.981 END TEST fio_dif_digest 00:19:13.981 ************************************ 00:19:13.981 20:55:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:13.981 20:55:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:13.981 20:55:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.981 rmmod nvme_tcp 00:19:13.981 rmmod nvme_fabrics 00:19:13.981 rmmod nvme_keyring 00:19:13.981 20:55:34 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82232 ']' 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82232 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 82232 ']' 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 82232 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82232 00:19:13.981 killing process with pid 82232 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82232' 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@967 -- # kill 82232 00:19:13.981 20:55:34 nvmf_dif -- common/autotest_common.sh@972 -- # wait 82232 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:19:13.981 20:55:34 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:13.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.981 Waiting for block devices as requested 00:19:13.981 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.981 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.981 20:55:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:13.981 20:55:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:13.981 20:55:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.981 20:55:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.981 20:55:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.981 20:55:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:13.981 20:55:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.981 20:55:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:13.981 ************************************ 00:19:13.981 END TEST nvmf_dif 00:19:13.981 ************************************ 00:19:13.981 00:19:13.981 real 1m0.104s 00:19:13.981 user 3m45.272s 00:19:13.981 sys 0m23.241s 00:19:13.981 20:55:35 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.981 20:55:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:13.981 20:55:35 -- common/autotest_common.sh@1142 -- # return 0 00:19:13.981 20:55:35 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:13.981 20:55:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:13.981 20:55:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.981 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:19:13.981 ************************************ 00:19:13.981 START TEST nvmf_abort_qd_sizes 00:19:13.981 ************************************ 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:13.981 * Looking for test storage... 00:19:13.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.981 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:13.982 20:55:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:13.982 Cannot find device "nvmf_tgt_br" 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:13.982 Cannot find device "nvmf_tgt_br2" 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:13.982 Cannot find device "nvmf_tgt_br" 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:13.982 Cannot find device "nvmf_tgt_br2" 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:13.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:13.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:13.982 20:55:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:13.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:13.982 00:19:13.982 --- 10.0.0.2 ping statistics --- 00:19:13.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.982 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:13.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:13.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:19:13.982 00:19:13.982 --- 10.0.0.3 ping statistics --- 00:19:13.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.982 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:13.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:13.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:13.982 00:19:13.982 --- 10.0.0.1 ping statistics --- 00:19:13.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.982 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:13.982 20:55:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:14.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:14.924 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:15.180 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83600 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83600 00:19:15.180 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 83600 ']' 00:19:15.181 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.181 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.181 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.181 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.181 20:55:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:15.181 [2024-07-15 20:55:36.997567] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:19:15.181 [2024-07-15 20:55:36.997638] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.438 [2024-07-15 20:55:37.144512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:15.438 [2024-07-15 20:55:37.228781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.438 [2024-07-15 20:55:37.228837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.438 [2024-07-15 20:55:37.228847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.438 [2024-07-15 20:55:37.228855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.438 [2024-07-15 20:55:37.228862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.438 [2024-07-15 20:55:37.229847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.438 [2024-07-15 20:55:37.229929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.438 [2024-07-15 20:55:37.230017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.438 [2024-07-15 20:55:37.230019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.438 [2024-07-15 20:55:37.271740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:19:16.005 20:55:37 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:19:16.005 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
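For readers untangling the xtrace above: the nvme_in_userspace helper it steps through reduces to a single lspci pipeline. A minimal sketch assembled only from the commands visible in this trace (lspci availability on the test VM is assumed; 01/08/02 is the PCI class/subclass/prog-if for NVMe controllers, per the printf %02x steps above):

  # enumerate NVMe (class 01, subclass 08, prog-if 02) PCI functions and print their addresses
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM the enumeration settles on 0000:00:10.0 and 0000:00:11.0, which is why the (( 2 > 0 )) guard above passes and the first device is handed to the spdk_target_abort test that follows.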
00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.273 20:55:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:16.273 ************************************ 00:19:16.273 START TEST spdk_target_abort 00:19:16.273 ************************************ 00:19:16.273 20:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:19:16.273 20:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:16.273 20:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:16.273 20:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.273 20:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:16.273 spdk_targetn1 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:16.273 [2024-07-15 20:55:38.051321] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:16.273 [2024-07-15 20:55:38.083431] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.273 20:55:38 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:16.273 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:16.274 20:55:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:19.557 Initializing NVMe Controllers 00:19:19.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:19.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:19.557 Initialization complete. Launching workers. 
00:19:19.557 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13342, failed: 0 00:19:19.557 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1106, failed to submit 12236 00:19:19.557 success 711, unsuccess 395, failed 0 00:19:19.557 20:55:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:19.557 20:55:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:22.843 Initializing NVMe Controllers 00:19:22.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:22.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:22.843 Initialization complete. Launching workers. 00:19:22.843 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8966, failed: 0 00:19:22.843 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1141, failed to submit 7825 00:19:22.843 success 361, unsuccess 780, failed 0 00:19:22.843 20:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:22.843 20:55:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:26.127 Initializing NVMe Controllers 00:19:26.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:26.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:26.127 Initialization complete. Launching workers. 
00:19:26.127 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36461, failed: 0 00:19:26.127 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2343, failed to submit 34118 00:19:26.127 success 600, unsuccess 1743, failed 0 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.127 20:55:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83600 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 83600 ']' 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 83600 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83600 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:26.694 killing process with pid 83600 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83600' 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 83600 00:19:26.694 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 83600 00:19:26.953 00:19:26.953 real 0m10.821s 00:19:26.953 user 0m43.232s 00:19:26.953 sys 0m2.608s 00:19:26.953 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:26.953 20:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:26.953 ************************************ 00:19:26.953 END TEST spdk_target_abort 00:19:26.953 ************************************ 00:19:26.953 20:55:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:19:26.953 20:55:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:19:26.953 20:55:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:26.953 20:55:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.953 20:55:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:26.953 
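Before the kernel_target_abort trace begins, the spdk_target_abort sequence that just ended is easier to read in condensed form. This is only a sketch assembled from the RPCs and the abort invocation shown above, not a replacement for target/abort_qd_sizes.sh; rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier (pid 83600 in this run), and the paths and the 10.0.0.2:4420 listener are the ones this VM used:

  # export the local PCIe NVMe device (0000:00:10.0) over NVMe/TCP
  rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # drive the target with the abort example at each queue depth exercised above
  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

  # tear down
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
  rpc_cmd bdev_nvme_detach_controller spdk_target

The per-depth summaries above (I/O completed, abort submitted, success/unsuccess counts) are printed by that abort example, one block per queue depth.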
************************************ 00:19:26.953 START TEST kernel_target_abort 00:19:26.953 ************************************ 00:19:26.953 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:19:26.953 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:19:26.953 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:27.211 20:55:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:27.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:27.777 Waiting for block devices as requested 00:19:27.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:27.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:28.036 No valid GPT data, bailing 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:28.036 No valid GPT data, bailing 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:28.036 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:28.037 No valid GPT data, bailing 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:28.037 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:28.296 No valid GPT data, bailing 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:28.296 20:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e --hostid=69e37e11-dc2b-47bc-a2e9-49065053d84e -a 10.0.0.1 -t tcp -s 4420 00:19:28.296 00:19:28.296 Discovery Log Number of Records 2, Generation counter 2 00:19:28.296 =====Discovery Log Entry 0====== 00:19:28.296 trtype: tcp 00:19:28.296 adrfam: ipv4 00:19:28.296 subtype: current discovery subsystem 00:19:28.296 treq: not specified, sq flow control disable supported 00:19:28.296 portid: 1 00:19:28.296 trsvcid: 4420 00:19:28.296 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:28.296 traddr: 10.0.0.1 00:19:28.296 eflags: none 00:19:28.296 sectype: none 00:19:28.296 =====Discovery Log Entry 1====== 00:19:28.296 trtype: tcp 00:19:28.296 adrfam: ipv4 00:19:28.296 subtype: nvme subsystem 00:19:28.296 treq: not specified, sq flow control disable supported 00:19:28.296 portid: 1 00:19:28.296 trsvcid: 4420 00:19:28.296 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:28.296 traddr: 10.0.0.1 00:19:28.296 eflags: none 00:19:28.296 sectype: none 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:28.296 20:55:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:28.296 20:55:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:31.579 Initializing NVMe Controllers 00:19:31.579 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:31.579 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:31.579 Initialization complete. Launching workers. 00:19:31.579 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39013, failed: 0 00:19:31.579 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39013, failed to submit 0 00:19:31.579 success 0, unsuccess 39013, failed 0 00:19:31.579 20:55:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:31.579 20:55:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:34.859 Initializing NVMe Controllers 00:19:34.859 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:34.859 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:34.859 Initialization complete. Launching workers. 
00:19:34.859 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86904, failed: 0 00:19:34.859 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41235, failed to submit 45669 00:19:34.859 success 0, unsuccess 41235, failed 0 00:19:34.859 20:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:34.859 20:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:38.167 Initializing NVMe Controllers 00:19:38.167 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:38.167 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:38.167 Initialization complete. Launching workers. 00:19:38.167 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 112901, failed: 0 00:19:38.167 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28190, failed to submit 84711 00:19:38.167 success 0, unsuccess 28190, failed 0 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:38.167 20:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:38.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:42.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:42.022 00:19:42.022 real 0m14.595s 00:19:42.022 user 0m6.521s 00:19:42.023 sys 0m5.523s 00:19:42.023 ************************************ 00:19:42.023 20:56:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.023 20:56:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:42.023 END TEST kernel_target_abort 00:19:42.023 ************************************ 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:42.023 
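The three runs above sweep the I/O queue depth (4, 24 and 64) while the abort example drives 50/50 random read/write traffic and tries to abort the outstanding commands: at qd=4 every abort could be submitted (failed to submit 0), while at qd=24 and qd=64 a growing share could not be submitted (45669 and 84711), which is the queue-depth behaviour this test exercises. A sketch of the sweep, with the target string exactly as it appears in the trace:

    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

clean_kernel_target then undoes the configfs setup in reverse (unlink the port's subsystem symlink, rmdir the namespace, port and subsystem, modprobe -r nvmet_tcp nvmet) before setup.sh rebinds the PCI devices, as traced above.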
20:56:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.023 rmmod nvme_tcp 00:19:42.023 rmmod nvme_fabrics 00:19:42.023 rmmod nvme_keyring 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83600 ']' 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83600 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 83600 ']' 00:19:42.023 Process with pid 83600 is not found 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 83600 00:19:42.023 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83600) - No such process 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 83600 is not found' 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:19:42.023 20:56:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:42.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.281 Waiting for block devices as requested 00:19:42.541 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:42.541 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:42.541 00:19:42.541 real 0m29.080s 00:19:42.541 user 0m50.962s 00:19:42.541 sys 0m9.940s 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:42.541 20:56:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:42.541 ************************************ 00:19:42.541 END TEST nvmf_abort_qd_sizes 00:19:42.541 ************************************ 00:19:42.800 20:56:04 -- common/autotest_common.sh@1142 -- # return 0 00:19:42.800 20:56:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:42.800 20:56:04 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:19:42.800 20:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.800 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:19:42.800 ************************************ 00:19:42.800 START TEST keyring_file 00:19:42.800 ************************************ 00:19:42.800 20:56:04 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:42.800 * Looking for test storage... 00:19:42.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.801 20:56:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.801 20:56:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.801 20:56:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.801 20:56:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.801 20:56:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.801 20:56:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.801 20:56:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:19:42.801 20:56:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AGmMFQ6y3r 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AGmMFQ6y3r 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AGmMFQ6y3r 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AGmMFQ6y3r 00:19:42.801 20:56:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z93bFuXJID 00:19:42.801 20:56:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:19:42.801 20:56:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:19:43.061 20:56:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z93bFuXJID 00:19:43.061 20:56:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z93bFuXJID 00:19:43.061 20:56:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.z93bFuXJID 00:19:43.061 20:56:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=84472 00:19:43.061 20:56:04 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.061 20:56:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84472 00:19:43.061 20:56:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84472 ']' 00:19:43.061 20:56:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.061 20:56:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.061 20:56:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.061 20:56:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.061 20:56:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:43.061 [2024-07-15 20:56:04.806030] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:19:43.061 [2024-07-15 20:56:04.806113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84472 ] 00:19:43.061 [2024-07-15 20:56:04.947065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.320 [2024-07-15 20:56:05.030253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.320 [2024-07-15 20:56:05.071055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:19:43.889 20:56:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:43.889 [2024-07-15 20:56:05.646599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.889 null0 00:19:43.889 [2024-07-15 20:56:05.678529] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.889 [2024-07-15 20:56:05.678738] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:43.889 [2024-07-15 20:56:05.686500] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.889 20:56:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.889 20:56:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:43.889 [2024-07-15 20:56:05.702473] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:19:43.889 request: 00:19:43.889 { 00:19:43.889 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:19:43.889 "secure_channel": false, 00:19:43.889 "listen_address": { 00:19:43.889 "trtype": "tcp", 00:19:43.889 "traddr": "127.0.0.1", 00:19:43.889 "trsvcid": "4420" 00:19:43.889 }, 00:19:43.889 "method": "nvmf_subsystem_add_listener", 00:19:43.889 "req_id": 1 00:19:43.889 } 00:19:43.889 Got JSON-RPC error response 00:19:43.890 response: 00:19:43.890 { 00:19:43.890 "code": -32602, 00:19:43.890 "message": "Invalid parameters" 00:19:43.890 } 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
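Before the bdevperf helper (bperf) launched just below attaches to the target over TLS, the test prepared two file-based PSKs (key0 and key1) earlier in the trace and registers them over the bperf RPC socket. A condensed sketch of that flow, using the helper names as they appear in keyring/common.sh and nvmf/common.sh (the NVMeTLSkey-1 interchange encoding itself comes from an inline Python helper and is not reproduced here):

    key0path=$(mktemp)        # /tmp/tmp.AGmMFQ6y3r in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"    # group/other access is rejected later in the test
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
    # refcounts are then checked by filtering keyring_get_keys with jq
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'

Attaching the nvme0 controller with --psk key0 further down bumps key0's refcount from 1 to 2, which is what the (( 2 == 2 )) check asserts once bdevperf is connected.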
00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.890 20:56:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=84488 00:19:43.890 20:56:05 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:19:43.890 20:56:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84488 /var/tmp/bperf.sock 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84488 ']' 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.890 20:56:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:43.890 [2024-07-15 20:56:05.763873] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 00:19:43.890 [2024-07-15 20:56:05.763945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84488 ] 00:19:44.149 [2024-07-15 20:56:05.905394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.149 [2024-07-15 20:56:06.002995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.149 [2024-07-15 20:56:06.044412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:44.715 20:56:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.715 20:56:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:19:44.715 20:56:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:44.715 20:56:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:44.973 20:56:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z93bFuXJID 00:19:44.973 20:56:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z93bFuXJID 00:19:45.231 20:56:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:19:45.231 20:56:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:19:45.231 20:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:45.231 20:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:45.231 20:56:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:45.489 20:56:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.AGmMFQ6y3r == 
\/\t\m\p\/\t\m\p\.\A\G\m\M\F\Q\6\y\3\r ]] 00:19:45.489 20:56:07 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:19:45.489 20:56:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:19:45.489 20:56:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:45.489 20:56:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:45.489 20:56:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:45.747 20:56:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.z93bFuXJID == \/\t\m\p\/\t\m\p\.\z\9\3\b\F\u\X\J\I\D ]] 00:19:45.747 20:56:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:45.747 20:56:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:19:45.747 20:56:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:45.747 20:56:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:46.005 20:56:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:19:46.005 20:56:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:46.005 20:56:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:46.263 [2024-07-15 20:56:08.000415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.263 nvme0n1 00:19:46.263 20:56:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:19:46.263 20:56:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:46.263 20:56:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:46.263 20:56:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:46.263 20:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:46.263 20:56:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:46.520 20:56:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:19:46.520 20:56:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:19:46.521 20:56:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:46.521 20:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:46.521 20:56:08 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:46.521 20:56:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:46.521 20:56:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:46.779 20:56:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:19:46.779 20:56:08 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:46.779 Running I/O for 1 seconds... 00:19:48.150 00:19:48.150 Latency(us) 00:19:48.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.150 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:19:48.150 nvme0n1 : 1.00 16120.69 62.97 0.00 0.00 7923.23 3842.67 17792.10 00:19:48.150 =================================================================================================================== 00:19:48.150 Total : 16120.69 62.97 0.00 0.00 7923.23 3842.67 17792.10 00:19:48.150 0 00:19:48.150 20:56:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:48.150 20:56:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:48.150 20:56:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:19:48.150 20:56:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:48.150 20:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:48.150 20:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:48.150 20:56:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:48.150 20:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:48.150 20:56:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:19:48.150 20:56:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:19:48.150 20:56:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:48.150 20:56:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:48.150 20:56:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:48.150 20:56:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:48.150 20:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:48.407 20:56:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:19:48.407 20:56:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:19:48.407 20:56:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:48.407 20:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:48.664 [2024-07-15 20:56:10.424575] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:48.664 [2024-07-15 20:56:10.425507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1530d50 (107): Transport endpoint is not connected 00:19:48.664 [2024-07-15 20:56:10.426495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1530d50 (9): Bad file descriptor 00:19:48.664 [2024-07-15 20:56:10.427491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:48.664 [2024-07-15 20:56:10.427508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:19:48.664 [2024-07-15 20:56:10.427517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:48.664 request: 00:19:48.664 { 00:19:48.664 "name": "nvme0", 00:19:48.664 "trtype": "tcp", 00:19:48.664 "traddr": "127.0.0.1", 00:19:48.664 "adrfam": "ipv4", 00:19:48.664 "trsvcid": "4420", 00:19:48.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:48.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:48.664 "prchk_reftag": false, 00:19:48.664 "prchk_guard": false, 00:19:48.664 "hdgst": false, 00:19:48.664 "ddgst": false, 00:19:48.664 "psk": "key1", 00:19:48.664 "method": "bdev_nvme_attach_controller", 00:19:48.664 "req_id": 1 00:19:48.664 } 00:19:48.664 Got JSON-RPC error response 00:19:48.664 response: 00:19:48.664 { 00:19:48.664 "code": -5, 00:19:48.664 "message": "Input/output error" 00:19:48.664 } 00:19:48.664 20:56:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:19:48.664 20:56:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.664 20:56:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.664 20:56:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.665 20:56:10 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:19:48.665 20:56:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:48.665 20:56:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:48.665 20:56:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:48.665 20:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:48.665 20:56:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:48.923 20:56:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:19:48.923 20:56:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:19:48.923 20:56:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:48.923 20:56:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:48.923 20:56:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:48.923 20:56:10 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:48.923 20:56:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:49.181 20:56:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:19:49.181 20:56:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:19:49.181 20:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:49.181 20:56:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:19:49.181 20:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:19:49.439 20:56:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:19:49.439 20:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:49.439 20:56:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:19:49.697 20:56:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:19:49.697 20:56:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.AGmMFQ6y3r 00:19:49.697 20:56:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.697 20:56:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:49.697 20:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:49.955 [2024-07-15 20:56:11.641019] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AGmMFQ6y3r': 0100660 00:19:49.955 [2024-07-15 20:56:11.641070] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:49.955 request: 00:19:49.955 { 00:19:49.955 "name": "key0", 00:19:49.955 "path": "/tmp/tmp.AGmMFQ6y3r", 00:19:49.955 "method": "keyring_file_add_key", 00:19:49.955 "req_id": 1 00:19:49.955 } 00:19:49.955 Got JSON-RPC error response 00:19:49.955 response: 00:19:49.955 { 00:19:49.955 "code": -1, 00:19:49.955 "message": "Operation not permitted" 00:19:49.955 } 00:19:49.955 20:56:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:19:49.955 20:56:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.955 20:56:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.955 20:56:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.955 20:56:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.AGmMFQ6y3r 00:19:49.955 20:56:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:49.955 20:56:11 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AGmMFQ6y3r 00:19:49.955 20:56:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.AGmMFQ6y3r 00:19:50.212 20:56:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:19:50.212 20:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:50.212 20:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:50.212 20:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:50.212 20:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:50.212 20:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:50.212 20:56:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:19:50.212 20:56:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:50.212 20:56:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:19:50.212 20:56:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:50.212 20:56:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:19:50.212 20:56:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.212 20:56:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:19:50.212 20:56:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.213 20:56:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:50.213 20:56:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:50.471 [2024-07-15 20:56:12.224191] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AGmMFQ6y3r': No such file or directory 00:19:50.471 [2024-07-15 20:56:12.224229] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:19:50.471 [2024-07-15 20:56:12.224253] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:19:50.471 [2024-07-15 20:56:12.224261] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:50.471 [2024-07-15 20:56:12.224269] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:19:50.471 request: 00:19:50.471 { 00:19:50.471 "name": "nvme0", 00:19:50.471 "trtype": "tcp", 00:19:50.471 "traddr": "127.0.0.1", 00:19:50.471 "adrfam": "ipv4", 00:19:50.471 "trsvcid": "4420", 00:19:50.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:50.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:50.471 "prchk_reftag": false, 00:19:50.471 "prchk_guard": false, 00:19:50.471 "hdgst": false, 00:19:50.471 "ddgst": false, 00:19:50.471 "psk": "key0", 00:19:50.471 "method": "bdev_nvme_attach_controller", 00:19:50.471 "req_id": 1 00:19:50.471 } 00:19:50.471 
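The failures in this stretch are intentional: attaching with the wrong PSK (key1) ends in a TLS-level disconnect surfaced as Input/output error, a key file re-chmodded to 0660 is refused by keyring_file_add_key, and once the file is removed the attach fails with No such device, as the response just below shows. Each case is wrapped in the NOT helper so an expected failure does not abort the run; a minimal stand-in for that wrapper (the autotest_common.sh version also normalizes signal exit codes above 128) could look like:

    NOT() {
        if "$@"; then
            return 1    # the command was supposed to fail but succeeded
        fi
        return 0        # failure is the expected outcome
    }
    # e.g. the missing-key-file case traced here:
    NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0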
Got JSON-RPC error response 00:19:50.471 response: 00:19:50.471 { 00:19:50.471 "code": -19, 00:19:50.471 "message": "No such device" 00:19:50.471 } 00:19:50.471 20:56:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:19:50.471 20:56:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.471 20:56:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.471 20:56:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.471 20:56:12 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:19:50.471 20:56:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:50.729 20:56:12 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yj3IixZVL8 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:50.729 20:56:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:50.729 20:56:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.729 20:56:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:50.729 20:56:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:50.729 20:56:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:19:50.729 20:56:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yj3IixZVL8 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yj3IixZVL8 00:19:50.729 20:56:12 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.yj3IixZVL8 00:19:50.729 20:56:12 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yj3IixZVL8 00:19:50.729 20:56:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yj3IixZVL8 00:19:50.988 20:56:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:50.988 20:56:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:51.246 nvme0n1 00:19:51.246 20:56:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:19:51.246 20:56:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:51.247 20:56:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:51.247 20:56:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:51.247 20:56:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:19:51.247 20:56:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:51.504 20:56:13 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:19:51.504 20:56:13 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:19:51.504 20:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:51.504 20:56:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:19:51.504 20:56:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:19:51.504 20:56:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:51.505 20:56:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:51.505 20:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:51.764 20:56:13 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:19:51.764 20:56:13 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:19:51.764 20:56:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:51.764 20:56:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:51.764 20:56:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:51.764 20:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:51.764 20:56:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:52.021 20:56:13 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:19:52.021 20:56:13 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:52.021 20:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:52.280 20:56:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:19:52.280 20:56:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:19:52.280 20:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:52.280 20:56:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:19:52.280 20:56:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yj3IixZVL8 00:19:52.280 20:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yj3IixZVL8 00:19:52.539 20:56:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z93bFuXJID 00:19:52.539 20:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z93bFuXJID 00:19:52.797 20:56:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:52.797 20:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:53.056 nvme0n1 00:19:53.056 20:56:14 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:19:53.056 20:56:14 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:19:53.315 20:56:15 keyring_file -- keyring/file.sh@112 -- # config='{ 00:19:53.315 "subsystems": [ 00:19:53.315 { 00:19:53.315 "subsystem": "keyring", 00:19:53.315 "config": [ 00:19:53.315 { 00:19:53.315 "method": "keyring_file_add_key", 00:19:53.315 "params": { 00:19:53.315 "name": "key0", 00:19:53.315 "path": "/tmp/tmp.yj3IixZVL8" 00:19:53.315 } 00:19:53.315 }, 00:19:53.315 { 00:19:53.315 "method": "keyring_file_add_key", 00:19:53.315 "params": { 00:19:53.315 "name": "key1", 00:19:53.315 "path": "/tmp/tmp.z93bFuXJID" 00:19:53.315 } 00:19:53.315 } 00:19:53.315 ] 00:19:53.315 }, 00:19:53.315 { 00:19:53.315 "subsystem": "iobuf", 00:19:53.315 "config": [ 00:19:53.315 { 00:19:53.315 "method": "iobuf_set_options", 00:19:53.315 "params": { 00:19:53.315 "small_pool_count": 8192, 00:19:53.315 "large_pool_count": 1024, 00:19:53.315 "small_bufsize": 8192, 00:19:53.315 "large_bufsize": 135168 00:19:53.315 } 00:19:53.315 } 00:19:53.315 ] 00:19:53.315 }, 00:19:53.315 { 00:19:53.315 "subsystem": "sock", 00:19:53.315 "config": [ 00:19:53.315 { 00:19:53.315 "method": "sock_set_default_impl", 00:19:53.315 "params": { 00:19:53.315 "impl_name": "uring" 00:19:53.315 } 00:19:53.315 }, 00:19:53.315 { 00:19:53.315 "method": "sock_impl_set_options", 00:19:53.315 "params": { 00:19:53.315 "impl_name": "ssl", 00:19:53.315 "recv_buf_size": 4096, 00:19:53.315 "send_buf_size": 4096, 00:19:53.315 "enable_recv_pipe": true, 00:19:53.315 "enable_quickack": false, 00:19:53.315 "enable_placement_id": 0, 00:19:53.315 "enable_zerocopy_send_server": true, 00:19:53.315 "enable_zerocopy_send_client": false, 00:19:53.315 "zerocopy_threshold": 0, 00:19:53.315 "tls_version": 0, 00:19:53.315 "enable_ktls": false 00:19:53.315 } 00:19:53.315 }, 00:19:53.315 { 00:19:53.315 "method": "sock_impl_set_options", 00:19:53.315 "params": { 00:19:53.315 "impl_name": "posix", 00:19:53.315 "recv_buf_size": 2097152, 00:19:53.315 "send_buf_size": 2097152, 00:19:53.315 "enable_recv_pipe": true, 00:19:53.315 "enable_quickack": false, 00:19:53.315 "enable_placement_id": 0, 00:19:53.316 "enable_zerocopy_send_server": true, 00:19:53.316 "enable_zerocopy_send_client": false, 00:19:53.316 "zerocopy_threshold": 0, 00:19:53.316 "tls_version": 0, 00:19:53.316 "enable_ktls": false 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "sock_impl_set_options", 00:19:53.316 "params": { 00:19:53.316 "impl_name": "uring", 00:19:53.316 "recv_buf_size": 2097152, 00:19:53.316 "send_buf_size": 2097152, 00:19:53.316 "enable_recv_pipe": true, 00:19:53.316 "enable_quickack": false, 00:19:53.316 "enable_placement_id": 0, 00:19:53.316 "enable_zerocopy_send_server": false, 00:19:53.316 "enable_zerocopy_send_client": false, 00:19:53.316 "zerocopy_threshold": 0, 00:19:53.316 "tls_version": 0, 00:19:53.316 "enable_ktls": false 00:19:53.316 } 00:19:53.316 } 00:19:53.316 ] 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "subsystem": "vmd", 00:19:53.316 "config": [] 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "subsystem": "accel", 00:19:53.316 "config": [ 00:19:53.316 { 00:19:53.316 "method": "accel_set_options", 00:19:53.316 "params": { 00:19:53.316 "small_cache_size": 128, 00:19:53.316 "large_cache_size": 16, 00:19:53.316 "task_count": 2048, 00:19:53.316 "sequence_count": 2048, 00:19:53.316 "buf_count": 2048 00:19:53.316 } 00:19:53.316 } 00:19:53.316 ] 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "subsystem": "bdev", 00:19:53.316 "config": [ 00:19:53.316 { 
00:19:53.316 "method": "bdev_set_options", 00:19:53.316 "params": { 00:19:53.316 "bdev_io_pool_size": 65535, 00:19:53.316 "bdev_io_cache_size": 256, 00:19:53.316 "bdev_auto_examine": true, 00:19:53.316 "iobuf_small_cache_size": 128, 00:19:53.316 "iobuf_large_cache_size": 16 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "bdev_raid_set_options", 00:19:53.316 "params": { 00:19:53.316 "process_window_size_kb": 1024 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "bdev_iscsi_set_options", 00:19:53.316 "params": { 00:19:53.316 "timeout_sec": 30 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "bdev_nvme_set_options", 00:19:53.316 "params": { 00:19:53.316 "action_on_timeout": "none", 00:19:53.316 "timeout_us": 0, 00:19:53.316 "timeout_admin_us": 0, 00:19:53.316 "keep_alive_timeout_ms": 10000, 00:19:53.316 "arbitration_burst": 0, 00:19:53.316 "low_priority_weight": 0, 00:19:53.316 "medium_priority_weight": 0, 00:19:53.316 "high_priority_weight": 0, 00:19:53.316 "nvme_adminq_poll_period_us": 10000, 00:19:53.316 "nvme_ioq_poll_period_us": 0, 00:19:53.316 "io_queue_requests": 512, 00:19:53.316 "delay_cmd_submit": true, 00:19:53.316 "transport_retry_count": 4, 00:19:53.316 "bdev_retry_count": 3, 00:19:53.316 "transport_ack_timeout": 0, 00:19:53.316 "ctrlr_loss_timeout_sec": 0, 00:19:53.316 "reconnect_delay_sec": 0, 00:19:53.316 "fast_io_fail_timeout_sec": 0, 00:19:53.316 "disable_auto_failback": false, 00:19:53.316 "generate_uuids": false, 00:19:53.316 "transport_tos": 0, 00:19:53.316 "nvme_error_stat": false, 00:19:53.316 "rdma_srq_size": 0, 00:19:53.316 "io_path_stat": false, 00:19:53.316 "allow_accel_sequence": false, 00:19:53.316 "rdma_max_cq_size": 0, 00:19:53.316 "rdma_cm_event_timeout_ms": 0, 00:19:53.316 "dhchap_digests": [ 00:19:53.316 "sha256", 00:19:53.316 "sha384", 00:19:53.316 "sha512" 00:19:53.316 ], 00:19:53.316 "dhchap_dhgroups": [ 00:19:53.316 "null", 00:19:53.316 "ffdhe2048", 00:19:53.316 "ffdhe3072", 00:19:53.316 "ffdhe4096", 00:19:53.316 "ffdhe6144", 00:19:53.316 "ffdhe8192" 00:19:53.316 ] 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "bdev_nvme_attach_controller", 00:19:53.316 "params": { 00:19:53.316 "name": "nvme0", 00:19:53.316 "trtype": "TCP", 00:19:53.316 "adrfam": "IPv4", 00:19:53.316 "traddr": "127.0.0.1", 00:19:53.316 "trsvcid": "4420", 00:19:53.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.316 "prchk_reftag": false, 00:19:53.316 "prchk_guard": false, 00:19:53.316 "ctrlr_loss_timeout_sec": 0, 00:19:53.316 "reconnect_delay_sec": 0, 00:19:53.316 "fast_io_fail_timeout_sec": 0, 00:19:53.316 "psk": "key0", 00:19:53.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:53.316 "hdgst": false, 00:19:53.316 "ddgst": false 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "bdev_nvme_set_hotplug", 00:19:53.316 "params": { 00:19:53.316 "period_us": 100000, 00:19:53.316 "enable": false 00:19:53.316 } 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "method": "bdev_wait_for_examine" 00:19:53.316 } 00:19:53.316 ] 00:19:53.316 }, 00:19:53.316 { 00:19:53.316 "subsystem": "nbd", 00:19:53.316 "config": [] 00:19:53.316 } 00:19:53.316 ] 00:19:53.316 }' 00:19:53.316 20:56:15 keyring_file -- keyring/file.sh@114 -- # killprocess 84488 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84488 ']' 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84488 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84488 00:19:53.316 killing process with pid 84488 00:19:53.316 Received shutdown signal, test time was about 1.000000 seconds 00:19:53.316 00:19:53.316 Latency(us) 00:19:53.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.316 =================================================================================================================== 00:19:53.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84488' 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@967 -- # kill 84488 00:19:53.316 20:56:15 keyring_file -- common/autotest_common.sh@972 -- # wait 84488 00:19:53.576 20:56:15 keyring_file -- keyring/file.sh@117 -- # bperfpid=84716 00:19:53.576 20:56:15 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84716 /var/tmp/bperf.sock 00:19:53.576 20:56:15 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84716 ']' 00:19:53.576 20:56:15 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:19:53.576 "subsystems": [ 00:19:53.576 { 00:19:53.576 "subsystem": "keyring", 00:19:53.576 "config": [ 00:19:53.576 { 00:19:53.576 "method": "keyring_file_add_key", 00:19:53.576 "params": { 00:19:53.576 "name": "key0", 00:19:53.576 "path": "/tmp/tmp.yj3IixZVL8" 00:19:53.576 } 00:19:53.576 }, 00:19:53.576 { 00:19:53.576 "method": "keyring_file_add_key", 00:19:53.576 "params": { 00:19:53.576 "name": "key1", 00:19:53.576 "path": "/tmp/tmp.z93bFuXJID" 00:19:53.576 } 00:19:53.576 } 00:19:53.576 ] 00:19:53.576 }, 00:19:53.576 { 00:19:53.576 "subsystem": "iobuf", 00:19:53.576 "config": [ 00:19:53.576 { 00:19:53.576 "method": "iobuf_set_options", 00:19:53.576 "params": { 00:19:53.576 "small_pool_count": 8192, 00:19:53.576 "large_pool_count": 1024, 00:19:53.576 "small_bufsize": 8192, 00:19:53.576 "large_bufsize": 135168 00:19:53.576 } 00:19:53.576 } 00:19:53.576 ] 00:19:53.576 }, 00:19:53.576 { 00:19:53.576 "subsystem": "sock", 00:19:53.576 "config": [ 00:19:53.576 { 00:19:53.576 "method": "sock_set_default_impl", 00:19:53.576 "params": { 00:19:53.576 "impl_name": "uring" 00:19:53.576 } 00:19:53.576 }, 00:19:53.576 { 00:19:53.576 "method": "sock_impl_set_options", 00:19:53.576 "params": { 00:19:53.576 "impl_name": "ssl", 00:19:53.576 "recv_buf_size": 4096, 00:19:53.576 "send_buf_size": 4096, 00:19:53.576 "enable_recv_pipe": true, 00:19:53.576 "enable_quickack": false, 00:19:53.576 "enable_placement_id": 0, 00:19:53.576 "enable_zerocopy_send_server": true, 00:19:53.576 "enable_zerocopy_send_client": false, 00:19:53.576 "zerocopy_threshold": 0, 00:19:53.576 "tls_version": 0, 00:19:53.576 "enable_ktls": false 00:19:53.576 } 00:19:53.576 }, 00:19:53.576 { 00:19:53.576 "method": "sock_impl_set_options", 00:19:53.576 "params": { 00:19:53.576 "impl_name": "posix", 00:19:53.576 "recv_buf_size": 2097152, 00:19:53.576 "send_buf_size": 2097152, 00:19:53.576 "enable_recv_pipe": true, 00:19:53.576 "enable_quickack": false, 00:19:53.576 "enable_placement_id": 0, 00:19:53.576 "enable_zerocopy_send_server": true, 00:19:53.576 "enable_zerocopy_send_client": false, 00:19:53.576 "zerocopy_threshold": 
0, 00:19:53.576 "tls_version": 0, 00:19:53.577 "enable_ktls": false 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "sock_impl_set_options", 00:19:53.577 "params": { 00:19:53.577 "impl_name": "uring", 00:19:53.577 "recv_buf_size": 2097152, 00:19:53.577 "send_buf_size": 2097152, 00:19:53.577 "enable_recv_pipe": true, 00:19:53.577 "enable_quickack": false, 00:19:53.577 "enable_placement_id": 0, 00:19:53.577 "enable_zerocopy_send_server": false, 00:19:53.577 "enable_zerocopy_send_client": false, 00:19:53.577 "zerocopy_threshold": 0, 00:19:53.577 "tls_version": 0, 00:19:53.577 "enable_ktls": false 00:19:53.577 } 00:19:53.577 } 00:19:53.577 ] 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "subsystem": "vmd", 00:19:53.577 "config": [] 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "subsystem": "accel", 00:19:53.577 "config": [ 00:19:53.577 { 00:19:53.577 "method": "accel_set_options", 00:19:53.577 "params": { 00:19:53.577 "small_cache_size": 128, 00:19:53.577 "large_cache_size": 16, 00:19:53.577 "task_count": 2048, 00:19:53.577 "sequence_count": 2048, 00:19:53.577 "buf_count": 2048 00:19:53.577 } 00:19:53.577 } 00:19:53.577 ] 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "subsystem": "bdev", 00:19:53.577 "config": [ 00:19:53.577 { 00:19:53.577 "method": "bdev_set_options", 00:19:53.577 "params": { 00:19:53.577 "bdev_io_pool_size": 65535, 00:19:53.577 "bdev_io_cache_size": 256, 00:19:53.577 "bdev_auto_examine": true, 00:19:53.577 "iobuf_small_cache_size": 128, 00:19:53.577 "iobuf_large_cache_size": 16 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "bdev_raid_set_options", 00:19:53.577 "params": { 00:19:53.577 "process_window_size_kb": 1024 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "bdev_iscsi_set_options", 00:19:53.577 "params": { 00:19:53.577 "timeout_sec": 30 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "bdev_nvme_set_options", 00:19:53.577 "params": { 00:19:53.577 "action_on_timeout": "none", 00:19:53.577 "timeout_us": 0, 00:19:53.577 "timeout_admin_us": 0, 00:19:53.577 "keep_alive_timeout_ms": 10000, 00:19:53.577 "arbitration_burst": 0, 00:19:53.577 "low_priority_weight": 0, 00:19:53.577 "medium_priority_weight": 0, 00:19:53.577 "high_priority_weight": 0, 00:19:53.577 "nvme_adminq_poll_period_us": 10000, 00:19:53.577 "nvme_ioq_poll_period_us": 0, 00:19:53.577 "io_queue_requests": 512, 00:19:53.577 "delay_cmd_submit": true, 00:19:53.577 "transport_retry_count": 4, 00:19:53.577 "bdev_retry_count": 3, 00:19:53.577 "transport_ack_timeout": 0, 00:19:53.577 "ctrlr_loss_timeout_sec": 0, 00:19:53.577 "reconnect_delay_sec": 0, 00:19:53.577 "fast_io_fail_timeout_sec": 0, 00:19:53.577 "disable_auto_failback": false, 00:19:53.577 "generate_uuids": false, 00:19:53.577 "transport_tos": 0, 00:19:53.577 "nvme_error_stat": false, 00:19:53.577 "rdma_srq_size": 0, 00:19:53.577 "io_path_stat": false, 00:19:53.577 "allow_accel_sequence": false, 00:19:53.577 "rdma_max_cq_size": 0, 00:19:53.577 "rdma_cm_event_timeout_ms": 0, 00:19:53.577 "dhchap_digests": [ 00:19:53.577 "sha256", 00:19:53.577 "sha384", 00:19:53.577 "sha512" 00:19:53.577 ], 00:19:53.577 "dhchap_dhgroups": [ 00:19:53.577 "null", 00:19:53.577 "ffdhe2048", 00:19:53.577 "ffdhe3072", 00:19:53.577 "ffdhe4096", 00:19:53.577 "ffdhe6144", 00:19:53.577 "ffdhe8192" 00:19:53.577 ] 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "bdev_nvme_attach_controller", 00:19:53.577 "params": { 00:19:53.577 "name": "nvme0", 00:19:53.577 "trtype": "TCP", 00:19:53.577 
"adrfam": "IPv4", 00:19:53.577 "traddr": "127.0.0.1", 00:19:53.577 "trsvcid": "4420", 00:19:53.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.577 "prchk_reftag": false, 00:19:53.577 "prchk_guard": false, 00:19:53.577 "ctrlr_loss_timeout_sec": 0, 00:19:53.577 "reconnect_delay_sec": 0, 00:19:53.577 "fast_io_fail_timeout_sec": 0, 00:19:53.577 "psk": "key0", 00:19:53.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:53.577 "hdgst": false, 00:19:53.577 "ddgst": false 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "bdev_nvme_set_hotplug", 00:19:53.577 "params": { 00:19:53.577 "period_us": 100000, 00:19:53.577 "enable": false 00:19:53.577 } 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "method": "bdev_wait_for_examine" 00:19:53.577 } 00:19:53.577 ] 00:19:53.577 }, 00:19:53.577 { 00:19:53.577 "subsystem": "nbd", 00:19:53.577 "config": [] 00:19:53.577 } 00:19:53.577 ] 00:19:53.577 }' 00:19:53.577 20:56:15 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:19:53.577 20:56:15 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:53.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:53.577 20:56:15 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.577 20:56:15 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:53.577 20:56:15 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.577 20:56:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:53.577 [2024-07-15 20:56:15.291461] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:19:53.577 [2024-07-15 20:56:15.291528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84716 ] 00:19:53.577 [2024-07-15 20:56:15.435349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.837 [2024-07-15 20:56:15.523991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.837 [2024-07-15 20:56:15.646399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:53.837 [2024-07-15 20:56:15.693749] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.405 20:56:16 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.405 20:56:16 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:19:54.405 20:56:16 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:19:54.405 20:56:16 keyring_file -- keyring/file.sh@120 -- # jq length 00:19:54.405 20:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:54.665 20:56:16 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:19:54.665 20:56:16 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:54.665 20:56:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:19:54.665 20:56:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:54.665 20:56:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:54.923 20:56:16 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:19:54.923 20:56:16 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:19:54.923 20:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:19:54.923 20:56:16 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:19:55.182 20:56:16 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:19:55.182 20:56:16 keyring_file -- keyring/file.sh@1 -- # cleanup 00:19:55.182 20:56:16 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yj3IixZVL8 /tmp/tmp.z93bFuXJID 00:19:55.182 20:56:16 keyring_file -- keyring/file.sh@20 -- # killprocess 84716 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84716 ']' 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84716 00:19:55.182 20:56:16 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84716 00:19:55.182 killing process with pid 84716 00:19:55.182 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.182 00:19:55.182 Latency(us) 00:19:55.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.182 =================================================================================================================== 00:19:55.182 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84716' 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@967 -- # kill 84716 00:19:55.182 20:56:16 keyring_file -- common/autotest_common.sh@972 -- # wait 84716 00:19:55.442 20:56:17 keyring_file -- keyring/file.sh@21 -- # killprocess 84472 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84472 ']' 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84472 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@953 -- # uname 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84472 00:19:55.442 killing process with pid 84472 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84472' 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@967 -- # kill 84472 00:19:55.442 [2024-07-15 20:56:17.172330] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:55.442 20:56:17 keyring_file -- common/autotest_common.sh@972 -- # wait 84472 00:19:55.699 00:19:55.699 real 0m13.015s 00:19:55.699 user 0m30.909s 00:19:55.699 sys 0m3.236s 00:19:55.699 20:56:17 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.699 20:56:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:55.699 ************************************ 00:19:55.699 END TEST keyring_file 00:19:55.699 ************************************ 00:19:55.699 20:56:17 -- common/autotest_common.sh@1142 -- # return 0 00:19:55.699 20:56:17 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:19:55.699 20:56:17 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:19:55.699 20:56:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:55.699 20:56:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.699 20:56:17 -- common/autotest_common.sh@10 -- # set +x 00:19:55.699 ************************************ 00:19:55.699 START TEST keyring_linux 00:19:55.699 ************************************ 00:19:55.699 20:56:17 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:19:55.958 * Looking for test 
storage... 00:19:55.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69e37e11-dc2b-47bc-a2e9-49065053d84e 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=69e37e11-dc2b-47bc-a2e9-49065053d84e 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.958 20:56:17 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.958 20:56:17 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.958 20:56:17 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.958 20:56:17 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.958 20:56:17 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.958 20:56:17 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.958 20:56:17 keyring_linux -- paths/export.sh@5 -- # export PATH 00:19:55.958 20:56:17 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:19:55.958 20:56:17 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:19:55.958 /tmp/:spdk-test:key0 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:19:55.958 20:56:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:19:55.958 20:56:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:19:55.958 /tmp/:spdk-test:key1 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84829 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.958 20:56:17 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84829 00:19:55.958 20:56:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84829 ']' 00:19:55.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.958 20:56:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.958 20:56:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.958 20:56:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.958 20:56:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.959 20:56:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:56.217 [2024-07-15 20:56:17.890895] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
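The keyring_linux test then loads the interchange PSKs just written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 into the kernel session keyring and refers to them by name and serial number, as traced below. A minimal sketch of that keyctl round-trip, using this run's names and serial (336123842); the payload is the interchange string written to the key file above.

# Sketch of the kernel-keyring round-trip exercised below.
sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)  # prints the new key's serial
keyctl search @s user :spdk-test:key0    # resolves the name back to the same serial
keyctl print "$sn"                       # shows the stored NVMeTLSkey-1:... payload
keyctl unlink "$sn"                      # "1 links removed", as in the cleanup below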
00:19:56.217 [2024-07-15 20:56:17.890968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84829 ] 00:19:56.217 [2024-07-15 20:56:18.032431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.217 [2024-07-15 20:56:18.110627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.476 [2024-07-15 20:56:18.151294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:19:57.043 20:56:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:57.043 [2024-07-15 20:56:18.724990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.043 null0 00:19:57.043 [2024-07-15 20:56:18.756906] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.043 [2024-07-15 20:56:18.757232] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.043 20:56:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:19:57.043 336123842 00:19:57.043 20:56:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:19:57.043 556022977 00:19:57.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:57.043 20:56:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84847 00:19:57.043 20:56:18 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:19:57.043 20:56:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84847 /var/tmp/bperf.sock 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84847 ']' 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.043 20:56:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:57.043 [2024-07-15 20:56:18.837822] Starting SPDK v24.09-pre git sha1 20d0fd684 / DPDK 24.03.0 initialization... 
00:19:57.043 [2024-07-15 20:56:18.838026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84847 ] 00:19:57.303 [2024-07-15 20:56:18.973849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.303 [2024-07-15 20:56:19.064559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.870 20:56:19 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.870 20:56:19 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:19:57.870 20:56:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:19:57.870 20:56:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:19:58.129 20:56:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:19:58.129 20:56:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:58.388 [2024-07-15 20:56:20.056668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:58.388 20:56:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:19:58.388 20:56:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:19:58.388 [2024-07-15 20:56:20.267478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.646 nvme0n1 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:19:58.646 20:56:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:19:58.646 20:56:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:19:58.906 20:56:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:58.906 20:56:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:58.906 20:56:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:19:58.906 20:56:20 keyring_linux -- keyring/linux.sh@25 -- # sn=336123842 00:19:58.906 20:56:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:19:58.906 20:56:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:19:58.906 
20:56:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 336123842 == \3\3\6\1\2\3\8\4\2 ]] 00:19:58.906 20:56:20 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 336123842 00:19:58.906 20:56:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:19:58.906 20:56:20 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:59.164 Running I/O for 1 seconds... 00:20:00.099 00:20:00.099 Latency(us) 00:20:00.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:00.099 nvme0n1 : 1.01 18431.09 72.00 0.00 0.00 6916.10 5790.33 13423.04 00:20:00.099 =================================================================================================================== 00:20:00.099 Total : 18431.09 72.00 0.00 0.00 6916.10 5790.33 13423.04 00:20:00.099 0 00:20:00.099 20:56:21 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:00.099 20:56:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:00.356 20:56:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:00.356 20:56:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:00.356 20:56:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:00.356 20:56:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:00.356 20:56:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:00.356 20:56:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:00.614 20:56:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:00.614 20:56:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:00.614 20:56:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:00.614 20:56:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:00.615 20:56:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:00.615 [2024-07-15 20:56:22.474829] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.615 [2024-07-15 20:56:22.475530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4c460 (107): Transport endpoint is not connected 00:20:00.615 [2024-07-15 20:56:22.476517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4c460 (9): Bad file descriptor 00:20:00.615 [2024-07-15 20:56:22.477513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:00.615 [2024-07-15 20:56:22.477534] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:00.615 [2024-07-15 20:56:22.477543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:00.615 request: 00:20:00.615 { 00:20:00.615 "name": "nvme0", 00:20:00.615 "trtype": "tcp", 00:20:00.615 "traddr": "127.0.0.1", 00:20:00.615 "adrfam": "ipv4", 00:20:00.615 "trsvcid": "4420", 00:20:00.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.615 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:00.615 "prchk_reftag": false, 00:20:00.615 "prchk_guard": false, 00:20:00.615 "hdgst": false, 00:20:00.615 "ddgst": false, 00:20:00.615 "psk": ":spdk-test:key1", 00:20:00.615 "method": "bdev_nvme_attach_controller", 00:20:00.615 "req_id": 1 00:20:00.615 } 00:20:00.615 Got JSON-RPC error response 00:20:00.615 response: 00:20:00.615 { 00:20:00.615 "code": -5, 00:20:00.615 "message": "Input/output error" 00:20:00.615 } 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@33 -- # sn=336123842 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 336123842 00:20:00.615 1 links removed 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@33 -- # sn=556022977 00:20:00.615 20:56:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 556022977 00:20:00.615 1 links removed 00:20:00.615 20:56:22 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 84847 00:20:00.615 20:56:22 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84847 ']' 00:20:00.872 20:56:22 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84847 00:20:00.872 20:56:22 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:20:00.872 20:56:22 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84847 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.873 killing process with pid 84847 00:20:00.873 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.873 00:20:00.873 Latency(us) 00:20:00.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.873 =================================================================================================================== 00:20:00.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84847' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@967 -- # kill 84847 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@972 -- # wait 84847 00:20:00.873 20:56:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84829 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84829 ']' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84829 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84829 00:20:00.873 killing process with pid 84829 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84829' 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@967 -- # kill 84829 00:20:00.873 20:56:22 keyring_linux -- common/autotest_common.sh@972 -- # wait 84829 00:20:01.439 00:20:01.439 real 0m5.533s 00:20:01.439 user 0m10.008s 00:20:01.439 sys 0m1.595s 00:20:01.439 20:56:23 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.439 20:56:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:01.439 ************************************ 00:20:01.439 END TEST keyring_linux 00:20:01.439 ************************************ 00:20:01.439 20:56:23 -- common/autotest_common.sh@1142 -- # return 0 00:20:01.439 20:56:23 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
00:20:01.439 20:56:23 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:20:01.439 20:56:23 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:20:01.439 20:56:23 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:20:01.439 20:56:23 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:20:01.439 20:56:23 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:20:01.439 20:56:23 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:20:01.439 20:56:23 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:20:01.439 20:56:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.439 20:56:23 -- common/autotest_common.sh@10 -- # set +x 00:20:01.439 20:56:23 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:20:01.439 20:56:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:01.439 20:56:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:01.439 20:56:23 -- common/autotest_common.sh@10 -- # set +x 00:20:03.970 INFO: APP EXITING 00:20:03.970 INFO: killing all VMs 00:20:03.970 INFO: killing vhost app 00:20:03.970 INFO: EXIT DONE 00:20:04.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:04.538 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:04.538 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:05.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.475 Cleaning 00:20:05.475 Removing: /var/run/dpdk/spdk0/config 00:20:05.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:05.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:05.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:05.475 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:05.475 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:05.475 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:05.475 Removing: /var/run/dpdk/spdk1/config 00:20:05.475 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:05.475 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:05.475 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:05.475 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:05.475 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:05.475 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:05.475 Removing: /var/run/dpdk/spdk2/config 00:20:05.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:05.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:05.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:05.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:05.475 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:05.475 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:05.475 Removing: /var/run/dpdk/spdk3/config 00:20:05.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:05.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:05.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:05.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:05.475 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:05.475 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:05.475 Removing: /var/run/dpdk/spdk4/config 00:20:05.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:05.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:05.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:05.475 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:05.475 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:05.475 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:05.475 Removing: /dev/shm/nvmf_trace.0 00:20:05.475 Removing: /dev/shm/spdk_tgt_trace.pid58781 00:20:05.475 Removing: /var/run/dpdk/spdk0 00:20:05.475 Removing: /var/run/dpdk/spdk1 00:20:05.475 Removing: /var/run/dpdk/spdk2 00:20:05.475 Removing: /var/run/dpdk/spdk3 00:20:05.475 Removing: /var/run/dpdk/spdk4 00:20:05.475 Removing: /var/run/dpdk/spdk_pid58635 00:20:05.475 Removing: /var/run/dpdk/spdk_pid58781 00:20:05.475 Removing: /var/run/dpdk/spdk_pid58979 00:20:05.475 Removing: /var/run/dpdk/spdk_pid59060 00:20:05.475 Removing: /var/run/dpdk/spdk_pid59088 00:20:05.475 Removing: /var/run/dpdk/spdk_pid59197 00:20:05.475 Removing: /var/run/dpdk/spdk_pid59215 00:20:05.475 Removing: /var/run/dpdk/spdk_pid59333 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59518 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59653 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59722 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59794 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59879 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59951 00:20:05.734 Removing: /var/run/dpdk/spdk_pid59989 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60025 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60081 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60197 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60608 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60654 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60705 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60721 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60783 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60799 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60866 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60876 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60922 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60940 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60980 00:20:05.734 Removing: /var/run/dpdk/spdk_pid60998 00:20:05.734 Removing: /var/run/dpdk/spdk_pid61115 00:20:05.734 Removing: /var/run/dpdk/spdk_pid61156 00:20:05.734 Removing: /var/run/dpdk/spdk_pid61225 00:20:05.734 Removing: /var/run/dpdk/spdk_pid61269 00:20:05.734 Removing: /var/run/dpdk/spdk_pid61299 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61361 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61392 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61422 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61461 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61490 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61530 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61559 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61599 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61628 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61663 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61697 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61734 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61768 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61803 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61832 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61872 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61901 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61938 00:20:05.735 Removing: /var/run/dpdk/spdk_pid61976 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62013 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62049 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62119 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62201 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62509 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62531 00:20:05.735 
Removing: /var/run/dpdk/spdk_pid62563 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62571 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62592 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62611 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62625 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62640 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62659 00:20:05.735 Removing: /var/run/dpdk/spdk_pid62673 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62688 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62707 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62726 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62736 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62755 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62774 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62790 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62809 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62822 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62838 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62868 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62884 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62919 00:20:05.993 Removing: /var/run/dpdk/spdk_pid62977 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63006 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63016 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63044 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63059 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63061 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63109 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63117 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63151 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63155 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63170 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63174 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63189 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63193 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63208 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63212 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63246 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63267 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63285 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63308 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63323 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63326 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63371 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63387 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63409 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63422 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63424 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63437 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63439 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63452 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63454 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63467 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63536 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63578 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63677 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63710 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63755 00:20:05.993 Removing: /var/run/dpdk/spdk_pid63775 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63792 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63814 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63845 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63861 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63931 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63947 00:20:05.994 Removing: /var/run/dpdk/spdk_pid63991 00:20:05.994 Removing: /var/run/dpdk/spdk_pid64051 00:20:05.994 Removing: /var/run/dpdk/spdk_pid64102 00:20:05.994 Removing: /var/run/dpdk/spdk_pid64126 00:20:05.994 Removing: 
/var/run/dpdk/spdk_pid64230 00:20:05.994 Removing: /var/run/dpdk/spdk_pid64273 00:20:06.252 Removing: /var/run/dpdk/spdk_pid64305 00:20:06.252 Removing: /var/run/dpdk/spdk_pid64529 00:20:06.252 Removing: /var/run/dpdk/spdk_pid64622 00:20:06.252 Removing: /var/run/dpdk/spdk_pid64651 00:20:06.252 Removing: /var/run/dpdk/spdk_pid64964 00:20:06.252 Removing: /var/run/dpdk/spdk_pid64997 00:20:06.252 Removing: /var/run/dpdk/spdk_pid65281 00:20:06.252 Removing: /var/run/dpdk/spdk_pid65685 00:20:06.252 Removing: /var/run/dpdk/spdk_pid65933 00:20:06.252 Removing: /var/run/dpdk/spdk_pid66691 00:20:06.252 Removing: /var/run/dpdk/spdk_pid67512 00:20:06.252 Removing: /var/run/dpdk/spdk_pid67623 00:20:06.252 Removing: /var/run/dpdk/spdk_pid67690 00:20:06.252 Removing: /var/run/dpdk/spdk_pid68930 00:20:06.252 Removing: /var/run/dpdk/spdk_pid69136 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72134 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72429 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72537 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72665 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72687 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72720 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72742 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72823 00:20:06.252 Removing: /var/run/dpdk/spdk_pid72958 00:20:06.252 Removing: /var/run/dpdk/spdk_pid73099 00:20:06.252 Removing: /var/run/dpdk/spdk_pid73169 00:20:06.252 Removing: /var/run/dpdk/spdk_pid73355 00:20:06.252 Removing: /var/run/dpdk/spdk_pid73434 00:20:06.252 Removing: /var/run/dpdk/spdk_pid73521 00:20:06.252 Removing: /var/run/dpdk/spdk_pid73819 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74208 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74211 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74477 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74495 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74516 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74541 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74546 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74843 00:20:06.252 Removing: /var/run/dpdk/spdk_pid74892 00:20:06.252 Removing: /var/run/dpdk/spdk_pid75166 00:20:06.252 Removing: /var/run/dpdk/spdk_pid75359 00:20:06.252 Removing: /var/run/dpdk/spdk_pid75732 00:20:06.252 Removing: /var/run/dpdk/spdk_pid76224 00:20:06.252 Removing: /var/run/dpdk/spdk_pid76979 00:20:06.252 Removing: /var/run/dpdk/spdk_pid77563 00:20:06.252 Removing: /var/run/dpdk/spdk_pid77565 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79452 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79501 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79562 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79617 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79732 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79787 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79847 00:20:06.252 Removing: /var/run/dpdk/spdk_pid79902 00:20:06.252 Removing: /var/run/dpdk/spdk_pid80205 00:20:06.252 Removing: /var/run/dpdk/spdk_pid81362 00:20:06.252 Removing: /var/run/dpdk/spdk_pid81496 00:20:06.252 Removing: /var/run/dpdk/spdk_pid81744 00:20:06.252 Removing: /var/run/dpdk/spdk_pid82289 00:20:06.252 Removing: /var/run/dpdk/spdk_pid82454 00:20:06.252 Removing: /var/run/dpdk/spdk_pid82611 00:20:06.252 Removing: /var/run/dpdk/spdk_pid82713 00:20:06.511 Removing: /var/run/dpdk/spdk_pid82877 00:20:06.511 Removing: /var/run/dpdk/spdk_pid82986 00:20:06.511 Removing: /var/run/dpdk/spdk_pid83651 00:20:06.511 Removing: /var/run/dpdk/spdk_pid83686 00:20:06.511 Removing: /var/run/dpdk/spdk_pid83721 00:20:06.511 Removing: /var/run/dpdk/spdk_pid83975 
00:20:06.511 Removing: /var/run/dpdk/spdk_pid84009 00:20:06.511 Removing: /var/run/dpdk/spdk_pid84040 00:20:06.511 Removing: /var/run/dpdk/spdk_pid84472 00:20:06.511 Removing: /var/run/dpdk/spdk_pid84488 00:20:06.511 Removing: /var/run/dpdk/spdk_pid84716 00:20:06.511 Removing: /var/run/dpdk/spdk_pid84829 00:20:06.511 Removing: /var/run/dpdk/spdk_pid84847 00:20:06.511 Clean 00:20:06.511 20:56:28 -- common/autotest_common.sh@1451 -- # return 0 00:20:06.511 20:56:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:20:06.511 20:56:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.511 20:56:28 -- common/autotest_common.sh@10 -- # set +x 00:20:06.511 20:56:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:20:06.511 20:56:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.511 20:56:28 -- common/autotest_common.sh@10 -- # set +x 00:20:06.511 20:56:28 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:06.511 20:56:28 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:06.511 20:56:28 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:06.511 20:56:28 -- spdk/autotest.sh@391 -- # hash lcov 00:20:06.511 20:56:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:06.511 20:56:28 -- spdk/autotest.sh@393 -- # hostname 00:20:06.770 20:56:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:06.770 geninfo: WARNING: invalid characters removed from testname! 
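The capture above and the merge/filter passes that follow are plain lcov invocations driven by autotest.sh. A minimal sketch of that coverage post-processing, assuming the cov_base.info baseline was captured earlier in the run; the paths and exclude patterns are taken from the log, while the loop and the shortened option set are simplifications of the individual commands traced below:

    # Sketch of the autotest.sh coverage steps (not the script itself).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    OUT=/home/vagrant/spdk_repo/spdk/../output
    REPO=/home/vagrant/spdk_repo/spdk

    # Capture the per-test counters gathered under the repo into cov_test.info.
    lcov $LCOV_OPTS -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge the baseline and test captures, then strip trees that should not
    # count toward SPDK coverage (bundled DPDK, system headers, sample apps).
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done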
00:20:33.348 20:56:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:33.606 20:56:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:36.140 20:56:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:38.041 20:56:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:40.570 20:57:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.520 20:57:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:44.424 20:57:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:44.424 20:57:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.424 20:57:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:44.424 20:57:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.424 20:57:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.424 20:57:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.424 20:57:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.424 20:57:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.424 20:57:06 -- paths/export.sh@5 -- $ export PATH 00:20:44.424 20:57:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.424 20:57:06 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:44.424 20:57:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:20:44.424 20:57:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721077026.XXXXXX 00:20:44.424 20:57:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721077026.IpZ07W 00:20:44.424 20:57:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:20:44.424 20:57:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:20:44.424 20:57:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:44.424 20:57:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:44.424 20:57:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:44.424 20:57:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:20:44.424 20:57:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:20:44.424 20:57:06 -- common/autotest_common.sh@10 -- $ set +x 00:20:44.424 20:57:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:20:44.424 20:57:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:20:44.424 20:57:06 -- pm/common@17 -- $ local monitor 00:20:44.424 20:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:44.424 20:57:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:44.424 20:57:06 -- pm/common@21 -- $ date +%s 00:20:44.424 20:57:06 -- pm/common@25 -- $ sleep 1 00:20:44.424 20:57:06 -- pm/common@21 -- $ date +%s 00:20:44.424 20:57:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721077026 00:20:44.424 20:57:06 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721077026 00:20:44.684 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721077026_collect-vmstat.pm.log 00:20:44.684 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721077026_collect-cpu-load.pm.log 00:20:45.655 20:57:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:20:45.655 20:57:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:45.655 20:57:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:45.655 20:57:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:45.655 20:57:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:45.655 20:57:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:45.655 20:57:07 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:45.655 20:57:07 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:45.655 20:57:07 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:45.655 20:57:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:45.655 20:57:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:45.655 20:57:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:45.655 20:57:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:45.655 20:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:45.655 20:57:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:45.655 20:57:07 -- pm/common@44 -- $ pid=86604 00:20:45.655 20:57:07 -- pm/common@50 -- $ kill -TERM 86604 00:20:45.655 20:57:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:45.655 20:57:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:45.655 20:57:07 -- pm/common@44 -- $ pid=86606 00:20:45.655 20:57:07 -- pm/common@50 -- $ kill -TERM 86606 00:20:45.655 + [[ -n 5104 ]] 00:20:45.655 + sudo kill 5104 00:20:45.665 [Pipeline] } 00:20:45.687 [Pipeline] // timeout 00:20:45.693 [Pipeline] } 00:20:45.713 [Pipeline] // stage 00:20:45.720 [Pipeline] } 00:20:45.738 [Pipeline] // catchError 00:20:45.749 [Pipeline] stage 00:20:45.752 [Pipeline] { (Stop VM) 00:20:45.768 [Pipeline] sh 00:20:46.050 + vagrant halt 00:20:49.352 ==> default: Halting domain... 00:20:54.620 [Pipeline] sh 00:20:54.897 + vagrant destroy -f 00:20:58.225 ==> default: Removing domain... 
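Before the VM is torn down, stop_monitor_resources ends the collect-cpu-load and collect-vmstat monitors through the PID files they left in the power output directory, as the pm/common trace above shows. A minimal sketch of that pidfile-based shutdown, with file names taken from the trace and the loop standing in for the per-monitor steps:

    # Sketch of the monitor shutdown performed by stop_monitor_resources.
    POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$POWER_DIR/$monitor.pid"
        # Each monitor recorded its PID when it started; SIGTERM lets it flush
        # its .pm.log before exiting.
        if [[ -e "$pidfile" ]]; then
            kill -TERM "$(cat "$pidfile")"
        fi
    done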
00:20:58.237 [Pipeline] sh 00:20:58.519 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:58.526 [Pipeline] } 00:20:58.537 [Pipeline] // stage 00:20:58.542 [Pipeline] } 00:20:58.556 [Pipeline] // dir 00:20:58.560 [Pipeline] } 00:20:58.572 [Pipeline] // wrap 00:20:58.576 [Pipeline] } 00:20:58.586 [Pipeline] // catchError 00:20:58.592 [Pipeline] stage 00:20:58.593 [Pipeline] { (Epilogue) 00:20:58.602 [Pipeline] sh 00:20:58.878 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:04.199 [Pipeline] catchError 00:21:04.201 [Pipeline] { 00:21:04.217 [Pipeline] sh 00:21:04.499 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:04.499 Artifacts sizes are good 00:21:04.508 [Pipeline] } 00:21:04.527 [Pipeline] // catchError 00:21:04.540 [Pipeline] archiveArtifacts 00:21:04.547 Archiving artifacts 00:21:04.707 [Pipeline] cleanWs 00:21:04.718 [WS-CLEANUP] Deleting project workspace... 00:21:04.718 [WS-CLEANUP] Deferred wipeout is used... 00:21:04.725 [WS-CLEANUP] done 00:21:04.727 [Pipeline] } 00:21:04.748 [Pipeline] // stage 00:21:04.755 [Pipeline] } 00:21:04.773 [Pipeline] // node 00:21:04.780 [Pipeline] End of Pipeline 00:21:04.815 Finished: SUCCESS